text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
One nice thing about Haskell is Quickcheck, if you're not familiar with Haskell or Quickcheck it works like this:
Prelude> let evens xs = [ x | x <- xs, x `mod` 2 == 0 ] Prelude> evens [1..10] [2,4,6,8,10]
Here we define a function that takes a list of numbers and returns only the even numbers. We can use Quickcheck to test this function:
Prelude> import Test.QuickCheck Prelude Test.QuickCheck> let test_evens xs = [ x | x <- evens xs, x `mod` 2 /= 0 ] == [] Prelude Test.QuickCheck> quickCheck test_evens +++ OK, passed 100 tests.
Here we define a test function that asserts that a list of even numbers
shouldn't contain any odd ones. Passing this function to Quickcheck
shows that the function passed 100 tests. Quickcheck sees that our
test_evens function takes a list of numbers and returns a boolean, and
it tests our code by generating a set of random inputs and executing it
with those inputs.
Clearly this type of testing can be very useful, and not just for Haskell programs. Theft is a C library that brings Quickcheck style testing to C. This sort of testing is known as Property Testing.
Let's reimplement our Haskell code in C,
#include <stdio.h> #include <stdlib.h> #include <theft.h> struct IntArray { int len; int arr[]; }; struct IntArray *evens(struct IntArray *input) { int nevens = 0; struct IntArray *output; if (input == NULL) { return NULL; } output = malloc(sizeof (struct IntArray) + input->len * sizeof(int)); if (output == NULL) { return NULL; } for (int i = 0; i < input->len; i++) { if (input->arr[i] % 2 == 0) { output->arr[nevens++] = input->arr[i]; } } output->len = nevens; return output; }
Here we define a function that takes an array of integers and outputs an array of even integers.
Now let's do some testing!
Since C is not strongly typed like Haskell we need to define a function that describes what a test input should look like. Providing this information to Theft will allow it to generate real test input data. The function should have the following prototype,
enum theft_alloc_res allocate_int_array(struct theft *, void *, void **)
Let's write it!
enum theft_alloc_res allocate_int_array(struct theft *t, void *data, void **result) { int SIZE_LIMIT = 100; int size = theft_random_choice(t, SIZE_LIMIT); struct IntArray *numbers = malloc(sizeof (struct IntArray) + size * sizeof(int)); if (numbers == NULL) { return THEFT_ALLOC_ERROR; } for (int i = 0; i < size; i++) { numbers->arr[i] = theft_random_choice(t, INT_MAX); } numbers->len = size; *result = numbers; return THEFT_ALLOC_OK; }
theft_random_choice is a function that will pick a random number
between 0 and some defined limit. The result is not truly random, but
instead based on the complexity of the input Theft requires. The
documentation for Theft points out that the main thing with this is to
ensure that wherever
theft_random_choice returns 0 our
alloc_int_array function should return the simplest input possible, in
our case that would be an empty array.
Theft passes a reference pointer to the
alloc_int_array function, this
must be updated to point to the array we have allocated before the
function returns with
THEFT_ALLOC_OK. In the event of some kind of
error the function should return
THEFT_ALLOC_ERROR
Next we write the property function, this function takes an input
array of integers generated by Theft, runs our
evens function over that input and
asserts that the resultant output doesn't contain any odd numbers.
enum theft_trial_res property_array_of_evens_has_no_odd_numbers(struct theft *t, void *test_input) { struct IntArray *test_array = test_input; struct IntArray *result = evens(test_array); // Array of even numbers should not contain any odd numbers for (int i = 0; i < result->len; i++) { if (result->arr[i] % 2 != 0) { return THEFT_TRIAL_FAIL; } } return THEFT_TRIAL_PASS; }
Putting this together, we define some boiler plate to cover the various functions we just defined for generating test inputs,
struct theft_type_info random_array_info = { .alloc = allocate_int_array, .free = theft_generic_free_cb, .autoshrink_config = { .enable = true, } };
The
alloc member is updated to point to the function we just defined.
Since the test inputs are dynamically allocated with
malloc they will need
to be freed later on. Theft provides a generic function for freeing
which is sufficient for our purposes:
theft_generic_free_cb.
The last member of this structure needs more explanation. If Theft encounters an input which causes the test to fail, it will try to pare down the input to the smallest input that causes failure; this is called shrinking.
Theft lets you define a function that can provide some control over the
shrinking process, or it can use its own shrinking functions:
autoshrinking. If autoshrinking is used however, the function that
allocates test inputs must base the complexity of the input it generates
upon the result of one of the
theft_random functions, such as
theft_random_bits, or
theft_random_choice. This is why our
alloc_int_array function uses
theft_random_choice rather than
standard pseudo random number generating functions.
Finally we write a function to execute the tests,
int main(void) { theft_seed seed = theft_seed_of_time(); struct theft_run_config config = { .name = __func__, .prop1 = property_array_of_evens_has_no_odd_numbers, .type_info = { &random_array_info }, .seed = seed }; return (theft_run(&config) == THEFT_RUN_PASS) ? EXIT_SUCCESS : EXIT_FAILURE; }
Compiling and running:
$ gcc -o test test.c -ltheft $ ./test == PROP 'main': 100 trials, seed 0x62a401b7fa52ac8b ....................................................................... ............................. == PASS 'main': pass 100, fail 0, skip 0, dup 0
I hope this helps anyone looking to try out Property Testing in C. Another guide that might be useful can be found here, it has an example that uses a manually defined shrinking function, which may be useful for more complex situations. | https://yakking.branchable.com/posts/property-testing-in-c/ | CC-MAIN-2020-40 | refinedweb | 883 | 59.23 |
Hello,
First off I would like to clearify that i'm terrible at this and have no idea what im doing
Anyhow, i would like to make a maze game with accelerometer. I have managed to make a ball within a certain area with this code:
import flash.events.Event;
var accelX:Number;
var accelY:Number;
var fl_Accelerometer:Accelerometer = new Accelerometer();
fl_Accelerometer.addEventListener(AccelerometerEvent.UPDATE, fl_AccelerometerUpdateHandler);
function fl_AccelerometerUpdateHandler(event:AccelerometerEvent):void
{
accelX = event.accelerationX;
accelY = event.accelerationY;
}
ball.addEventListener(Event.ENTER_FRAME, moveBall);;
}
}
Now the problem is that i would like to make some walls within this maze, some restricted areas if i may, so it becomes an actually maze. Have tried and failed several times and have only managed to make a but not limit it. so it just cuts the screen in half etc.
if each wall is a rectangle, you can simplify your coding by creating a parent for the walls and adding rectangular movieclip walls to that parent.
you could then loop through all the children of that parent checking a hitTestObject between your and the child walls. if it's negative, store the ball x and y. if positive, re-assign the ball x and y to the previously stored values.
Tried to do as you said, but as i earlier declared. I'm terrible, so the follow-up question is: What's a parent and children etc? Il understand if you wont be bothered to help me, as it may seem useless
ty for the reply though
edit:
Would something like this be useful?
if(ball.hitTestObject(wall)){
ball.y = 50
As the ball is my moving character (movieclip) and the "wall" is also a movieclip but not moving ofc?
create your maze. convert each rectangle in the maze to a movieclip and assign an instance name (eg, m1,m2,...,m44). don't convert any L-shaped, U-shaped or rotated L-shaped, U-shaped etc sections. only rectangular blocks.
you can then use:
var prevX:int;
var prevY:int;
var i:int;
var mP:Sprite=new Sprite();
addChild(mP);
for(i=1;i<=44,i++){
mP.addChild(this["m"+i]);
};
}
checkWallF();
}
function checkWallF():void{
for(i=0;i<mP.numChildren;i++){
if(ball.hitTestObject(mP.getChildAt(i))){
ball.x=prevX;
ball.y=prevY;
break;
}
prevX=ball.x;
prevY=ball.y
}
}
}
Im very sorry mr kglad. But it seems like I have fallen of the waggon once again.I started all over, creating the walls within an own layer and converterted each one rectangular wall to a movieclip naming them m1 to m24.
But the actionscrip seems to get bugged as i try to publish. Or i get some error messages
Don't know if you ment me to use simply the last one you posted or combine them somehow.
Anyhow if I use the last one I get these errors:
and if i try to fix them several other errors joins the party.
is it ment for me to swap something out from the code, amount of walls, wall names etc?
Again really, thanks for all the support. Wish I knew this a bit better, I am going to take some courses next year at school. Maybe i'll get it then.
this line:
for(i=1;i<=44,i++){
should b:
for(i=1;i<=44;i++){
Hmm yes,
Am I wrong or is there one to many}?
Either way I'm left with tons of errors. I think I'll put this project on ice for now.
One last thing, do you know of any tutorials or online classes, preferably free?
Thanks for your attempt though.
hmm alright, I must have screwed up somewhere. Anyhow il check up on it as i get home, my summerhouse dont have internett -.- bummer
thanks for your help though
you're welcome.
Hi again, Kglad. I've downloaded this flash file with code for the maze (i too am keen to make a maze), but when i test the movie, the ball fails to move.
Am i missing something?
are you testing on an iDevice?
not at all, am testing on a big pc. And, BTW, when i test the Flash file, an xml file is created. Is that supposed to happen?
check the publish settings. you're publishing for iOS, that's why flash is creating the xml.
also, that code uses the accelerometer to move the ball.
you need to change both to test ball movement in that file.
Hi. U are right, of course. I got that code and flash file from the above link, and it is set up for mobile devices. As i understand, Accelometer is for mobile devices, and while i changed the publish settings to use Flash as the Player (instead of Air for ios which was how the file was set up), i am wondering if it is possible to use this code as such, on a big PC, not mobile device.
And, i dont know how to change the accelemeter in hte publish settings....
thank you.
yes, but instead of using the accelerometer, use something else (like a keyboardevent listener) to move the ball.
Hey i got around to test your application now. It was set on android for Ios, but i simply swapped this in settings to air for android. Since that's what im working on
, but when i tested the app only one wall appear to be working, the very left one, m1. I find this rather odd since i see no difference on any of the rectangels. Did you test your app, was it fully functioning?
Hi Kglad
I have downloaded
I'm having the same problem as Tixy where just one wall of the left of the maze.fla file has collision, the ball MC just passes through the other walls in the project, is there something I need to change with the checkwall function?
Rgeards
Sam | http://forums.adobe.com/message/4540228?tstart=0 | CC-MAIN-2013-48 | refinedweb | 985 | 74.19 |
There are lots of reasons why we might want to draw a line or circle on our charts. We could look to add an average line, highlight a key data point or even draw a picture. This article will show how to add lines, circles and arcs with the example of a football pitch map that could then be used to show heatmaps, passes or anything else that happens during a match.
This example works with FIFA’s offical pitch sizes, but you might want to change them according to your data/sport/needs. Let’s import matplotlib as normal, in addition to its Arc functionality.
import matplotlib.pyplot as plt from matplotlib.patches import Arc
Drawing Lines
It is easiest for us to start with our lines around the outside of the pitch. Once we create our plot with the first two lines of our code, drawing a line is pretty easy with ‘.plot’. You have probably already seen ‘.plot’ used to display scatter points, but to draw a line, we just need to provide two lists as arguments and matplotlib will do the thinking for us:
- List one: starting and ending X locations
- List two: starting and ending Y locations
Take a look at the code and plot below to understand our outlines. Use the colour guides to see how they are plotted with start and end point lists.
fig=plt.figure() ax=fig.add_subplot(1,1,1) plt.plot([0,0],[0,90], color="blue") plt.plot([0,130],[90,90], color="orange") plt.plot([130,130],[90,0], color="green") plt.plot([130,0],[0,0], color="red") plt.plot([65,65],[0,90], color="pink") plt.show()
Great job! Matplotlib makes drawing lines very easy, it just takes some clear thinking with start and end locations to get them plotted.
Drawing Circles
Next up, we’re going to draw some circles on the pitch. Primarily, we need a centre circle, but we also need markers for the centre and penalty spots.
Adding circles is slightly different to lines. Firstly, we need to assign our circles to a variable. We use ‘.circle’ to do this, passing it two essential arguments:
- X/Y coordinates of the middle of the circle
- Radius of the circle
For our circles, we’ll also assign colour and fill, but these are optional.
With these circles assigned then use ‘.patch’ to draw the circle to our plot.
Take a look at our code below:
") #Assign circles to variables - do not fill the centre circle! centreCircle = plt.Circle((65,45),9.15,color="red",fill=False) centreSpot = plt.Circle((65,45),0.8,color="blue") #Draw the circles to our plot ax.add_patch(centreCircle) ax.add_patch(centreSpot) plt.show()
Drawing Arcs
Now that you can create circles, arcs will be just as easy – we’ll need them for the lines outside the penalty area. While they take a few more arguments, they follow the same pattern as before. Let’s go through the arguments:
- X/Y coordinates of the centrepoint of the arc, assuming the arc was a complete shape.
- Width – we must pass width and height as the arc might not be a circle, it might instead be from an oval shape
- Height – as above
- Angle – degree rotation of the shape (anti-clockwise)
- Theta1 – start location of the arc, in degrees
- Theta2 – end location of the arc, in degrees
That’s a few more arguments than for the circle and lines, but don’t let that make you think that this is too much more complicated. Our code will look like this for one arc:
leftArc = Arc((11,45),height=18.3,width=18.3,angle=0,theta1=310,theta2=50)
All that we need to do after this is draw the arc to our plot, just like with the circles:
ax.add_patch(leftArc)
You can see this in action below:
#Demo Arcs ") #Centre Circle/Spot centreCircle = plt.Circle((65,45),9.15,fill=False) centreSpot = plt.Circle((65,45),0.8) ax.add_patch(centreCircle) ax.add_patch(centreSpot) #Create Arc and add it to our plot leftArc = Arc((11,45),height=18.3,width=18.3,angle=0,theta1=310,theta2=50,color="red") ax.add_patch(leftArc) plt.show()
Bringing everything together
The code below applies the above lines, cricles and arcs to a function for quick and easy use. The only new line removes our axes:
plt.axis(‘off’)
Take a look through our function belong and follow what we are doing. Feel free to take this and use it as the base for your own plots!
def createPitch(): ') #Display Pitch plt.show() createPitch()
Summary
In our article, we’ve seen how to draw lines, arcs and circles in Matplotlib. You’ll find this useful when trying to add the finishing touches with annotations to any plot. These tools are equally important when drawing a map on which we will plot our data – like our pitchmap example here.
Take a look at our other visualisation articles here and be sure to get in touch with us on Twitter! | https://fcpython.com/visualisation/drawing-pitchmap-adding-lines-circles-matplotlib | CC-MAIN-2018-51 | refinedweb | 847 | 72.56 |
Scala developers might have heard of “type lambdas”, a fairly horrendous-looking construction that sometimes appears in code using higher-kinded types. What is a type lambda, why would we ever want to use one, and even better, how can we avoid having to write them? In this blog post we tackle these questions.
The need for type lambdas usually arises
when dealing with higher-kinded types.
These are type constructors that take other types
(or even other type constructor) as parameter.
A familiar example is
List.
List by itself is not a type.
You need to provide a parameter as
Int
to create a type such as
List[Int].
We call
List a type constructor.
When we declare a type variable that is a type constructor,
we write
F[_] to indicate that
F needs
to be provided with a type to construct a concrete type.
Type aliases
Scala developers will be familiar with declaring type aliases:
type L = List[(Option[(Int,Double)])]
We can use
L in any place
where we can use the unwieldy expression on the right side.
We can also also declare type aliases that take parameters:
type T[A] = Option[Map[Int, A]] val t: T[String] = Some(Map(1 -> "abc", 2 -> "xyz"))
Often type aliases are simply used as a convenience, but sometimes their use is required as in the following example.
Let’s define a type parameterised on another type constructor
(we’ll call it
Functor to link to a real-world example,
but could be anything else of the same kind—don’t let the name confuse you).
We’ll see what parameters we are allowed to use with it.
Remember that it is expecting a type constructor with one parameter:
trait Functor[F[_]] type F1 = Functor[Option] // OK type F2 = Functor[List] // OK type F3 = Functor[Map] // !! // error: Map takes two type parameters, expected: one // type fo = Functor[Map] // ^
The compiler error message indicates the problem:
Map takes two type parameters (
Map[K,V])
while the type parameter to
Functor expects one.
Type aliases are often used to
‘partially apply’ a type constructor
and so to ‘adapt’ the kind of the type to be used:
type IntKeyMap[A] = Map[Int, A] type F3 = Functor[IntKeyMap] // OK
IntKeyMap now takes a single type parameter,
and the compiler is happy with that.
This works fine,
but can we achieve the same goal
without having to declare an alias?
We could try to mirror the syntax of partially-applied
value-level functions, with the underscore syntax, as in:
val cube = Math.pow(_: Double, 3) // cube: Double => Double cube(2) // 8
But this syntax doesn’t do the same thing with types:
type F4 = Functor[Map[Int, _]] // error: Map[Int, _] takes no type parameters, expected: one // type F4 = Functor[Map[Int, _]] // ^
Scala uses the underscore in different (one could say inconsistent) ways depending on the context. In this case (in the right hand side of the type alias definition) what is implied is not partial application at all, but rather “I don’t care what this type is”. This is known as an existential type if you want to read up further.
There is, in fact, currently no direct syntax for partial application of type constructors in Scala.
Type lambdas
We can solve this problem of partially applying types by using a type lambda. Let’s return to our example:
({ type T[A] = Map[Int, A] })#T
The heart of the expression above appears to be exactly the same as declaring a type alias, but can be used inline:
type F5 = Functor[({ type T[A] = Map[Int, A] })#T] // OK
We can read the type lambda syntax as:
declaring an anonymous type,
inside of which we define the desired type alias,
and then accessing its type member with the
# syntax.
It’s easy to argue that the construction above is more offensive to the eye than using an extra line to declare a type alias the traditional way. Indeed we recommend using a type alias whenever possible to keep your code clean and readable. However, sometimes type lambdas are unavoidable. Consider the following rather abstract example:
def foo[A[_, _], B](functor: Functor[A[B, ?]]) // won't compile
This is not valid Scala,
but it is the quickest way to convey the intention.
Imagine that the
? behaves like the
partial type constructor application we mentioned earlier,
leaving the
functor argument as
a single unspecified type parameter.
Can we resolve the situation using a separate type alias?
type AB[C] = ... // what should we put here? def foo[A[_,_], B](functor: Functor[AB])
The answer is no, because at the time at which we define
AB,
we don’t have
A and
B available.
Attempts to ‘pass them in’ as parameters like this:
type T[A, B, C] = ...
defeat the purpose because they alter the type arity.
We needed an arity of 1 to pass into
foo
but now we’ve just increased it to 3,
which is clearly not going to go down with the compiler.
Alternative encodings
What are the possible ways of implementing our
Functor example?
Use a type lambda
We can use the type lambda after all:
def foo[A[_,_],B](functor: Functor[({type AB[C] = A[B,C]})#AB])
This works because the types
A and
B
are available in the scope when we define
AB.
Declare a surrounding class
If we prefer not to use type lambdas
we can split the definition of
foo in two:
class Foo[A[_,_],B] { type AB[C] = A[B, C] def apply(functor: Functor[AB]) = ... } def foo[A[_,_],B] = new Foo[A,B]
Like type lambdas, this technique allows us to
define
AB once
A and
B are already known.
However, this approach is verbose
and causes allocations at run time,
whereas the type lambda exists only within the compiler.
Curried type constructors
A third hypothetical solution, that is not currently possible but would fix this issue cleanly, is to use curried type constructors. These are similar to the partially applied type constructors we hypothesised earlier.
Just as we can have multiple argument lists for methods:
def fill(n: Int)(elem: Double) = ... val fill10 = fill(10) _ // fill10: Double => List[Double] fill10(5.1) // List(5.1, 5.1, 5.1, 5.1, 5.1, 5.1, 5.1, 5.1, 5.1, 5.1)
so we could, in principle, have the same at the type level:
type AB[A, B][C] = A[B, C]
We could then ‘partially apply’ this
(with the first argument list only:
AB[A, B]),
leaving behind the arity-1 type constructor we require:
type AB[A, B][C] = A[B, C] // (not valid syntax yet!) def foo[A[_, _], B](functor: Functor[AB[A,B]])
While this approach is currently fantasy, there have been rumours about the introduction of curried or partially applied types in Dotty.
Kind projector
While we wait for curried type constructors to become part of the language, we can find another solution in the kind projector compiler plugin.
Kind projector provides a clearer syntax for type lambdas. For example, we can implement our functor from above as follows:
type F = Functor[Map[Int, ?]] // now works! def foo[A[_, _], B](functor: Functor[A[B, ?]]) // now works!
With this we get as close as we can
to our initial aim of writing types
as if we were partially applying type constructors.
The only difference is that we use
? to do it instead of
_,
which already has too many uses in Scala.
During compilation, kind projector translates
type expressions containing
?
into regular type lambdas,
giving us the same semantics
with a large gain in readability.
Kind projector doesn’t work in all cases,
but for the most part it makes our lives a lot simpler!
Conclusions
Type lambdas are an ugly but necessary concept in Scala for the time being. We can usually use a type alias to avoid having to write a type lambda. In many cases that we cannot use an alias, the kind projector compiler plugin will usually solve the problem.
For more discussion of type lambdas, see this blog post by Adil Akhter.
For more information about kind projector, see the project page on Github. | https://underscore.io/blog/posts/2016/12/05/type-lambdas.html | CC-MAIN-2021-49 | refinedweb | 1,390 | 59.94 |
SimonPJ: >. [Note: in all of the following I am talking about the C-specific part of the ffi spec. Other languages (Java, .net, etc.) treat namespaces differently and would have different stuff in their language-specific parts.] This touches on the subject of static/dynamic linking namespaces which was the cause of Warren Burton's recent Darwin problem (on the Hugs list). With a single namespace (such as that provided by the Unix linker ld or the Unix dynamic linker dlopen(...,RTLD_GLOBAL|...)), there can only be one symbol with each name. This makes it impossible to have two libraries with identically named symbols. It probably doesn't matter whether you create a Haskell binding for the multiply defined symbols - their mere existence is enough to hose you. Conflicts between symbols in the library and the main program (e.g., GHCi or Hugs) will also hose you. I believe this is what GHC uses (because it is what ld does) and it may also be what GHCi uses. With a per-library namespace (such as that provided by dlopen when you don't specify RTLD_GLOBAL), each module has its own namespace and so different libraries can have overlapping names. It is probably still possible to have conflicts between symbols in the main program and symbols in the library. This is what Hugs/GreenCard and Hugs/FFI uses and is what you need to do to make Simon's example work. Obviously Hugs is doing the right thing. Well, no, maybe not... Multiple instantiation of a module in Haskell makes little difference at runtime, but multiple instantiation of a C library makes a huge difference to C code. Consider these three C files: A.c: extern int C; int inc() { return C++; } B.c: extern int C; int dec() { return --C; } C.c: int C = 0; And suppose we have a separate Haskell module (A.hs, B,hs, C,hs) corresponding to each of these files. 
With a single namespace, loading each of these Haskell modules results in the correct behaviour: the variable incremented by inc is the same variable decremented by dec is the same variable exported by C. With separate namespaces, the only way to avoid undefined symbols is to build the following combinations: A.hs + A.c + C.c B.hs + B.c + C.c C.hs + C.c This is absolutely not what we wanted through because now one variable has become three separate variables and inc modifies a different variable from dec. If we put all three modules into a single package, we might conceivably avoid this problem but the problem will then come up again between packages. For example, the xlib package probably uses errno but errno is in the libc package. We certainly don't want to duplicate errno and we certainly don't want to merge the two packages into one. In conclusion, C is designed to use a single global namespace and things break if you try to change that. Hence, I don't think the ffi for C can allow C libraries to export overlapping names. I think the ffi spec should explicitly say that all C libraries are loaded into a single global namespace. And I don't think the square bracket part can be treated like a Haskell qualified name. I intend to change Hugs/FFI to match GHC's behaviour. Fortunately, the only change required is to specify RTLD_GLOBAL when calling dlopen and that a symbol currently called 'initModule' in the ffi-generated code will be called 'init_Foo_Bar_Baz' instead if this is the code for a Haskell module called Foo.Bar.Baz. -- Alastair Reid reid at cs.utah.edu ps Note that if anyone really, really wants a local namespace, they can foreign-import the dlopen interface and code it up themselves usnig foreign import dynamic. pps If anyone has irreconcilable name conflicts between C libraries, I have a handy tool for renaming symbols in ELF binaries. It was developed as part of a project to add module-local namespaces to C. 
| http://www.haskell.org/pipermail/ffi/2002-June/000523.html | CC-MAIN-2014-15 | refinedweb | 673 | 62.98 |
- 21 Jan, 2015 1 commit
This patch allows to reuse vstr memory when creating str/bytes object. This improves memory usage. Also saves code ROM: 128 bytes on stmhal, 92 bytes on bare-arm, and 88 bytes on unix x64.
- 16 Jan, 2015 1 commit
See issue #699.
- 01 Jan, 2015 1 commit
Addresses issue #1022.
- 10 Dec, 2014 1 commit
- 03 Oct, 2014 1 commit
Addressing issue #50.
- 25 Sep, 2014 1 commit
It seems most sensible to use size_t for measuring "number of bytes" in malloc and vstr functions (since that's what size_t is for). We don't use mp_uint_t because malloc and vstr are not Micro Python specific.
- 26 Jun, 2014 2 commits
- Chris Angelico authored
- 21/.
- 31 Mar, 2014 1 commit
- 17 Mar, 2014 1 commit
- 15 Mar, 2014 2 commits
- 12 Feb, 2014 1 commit
- 06 Feb, 2014 1 commit
- 22 Jan, 2014 1 commit
-()).
- 06 Jan, 2014 2 commits
- 03 Jan, 2014 1 commit
import works for simple cases. Still work to do on finding the right script, and setting globals/locals correctly when running an imported function.
- 29 Dec, 2013 1 commit
- 03 Nov, 2013 1 commit
- 23 Oct, 2013 1 commit
- 20 Oct, 2013 1 commit | https://gitrepos.estec.esa.int/taste/uPython-mirror/-/commits/0b9ee86133a2a0524691c6cdac209dbfcb3bf116/py/vstr.c | CC-MAIN-2022-33 | refinedweb | 205 | 81.22 |
This is the third chapter of the Generate Random Content For SharePoint series. This time, detailing the challenges you might face when you want to create Excel workbooks. Chapters:
- Generate random content for SharePoint – About how to create big files for upload testing.
- Generate random content for SharePoint 2 – About how to generate Word files for Search and integration testing.
- Generate random content for SharePoint 3 – About how to generate Excel files for Search and integration testing. (This article.)
- Generate random content for SharePoint 4 – About how to generate PowerPoint files for Search and integration testing.
(If you don’t want to read through all the drama, jump directly to the Falling Action section.)
Exposition
If you've been following my previous posts, by now you should know how to generate unique, big files for upload testing, and Word documents with random words to test the indexing functionality. In this post I'll show you the challenges you might face when creating Excel files.
Rising Action
Act 1
An Excel table is relatively simple, with column and row headers and some numbers. Unfortunately we do not have the "Lorem ipsum" generator as in Word, but in the previous post we already discussed that this would not be good anyway. We could, however, use the methods learned in the previous post: get the header columns' content from a dictionary and use a simple random generator for the numbers.
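Before the loops below can run, a dictionary file and a text writer have to be prepared. Here is a minimal sketch of that setup — the variable names match the loop snippet that follows, but the file paths and sizes are placeholders of my own, not necessarily the original script:

```powershell
# Placeholder paths -- adjust to your environment
$TempTXT        = "C:\Temp\RandomTable.txt"
$DictionaryFile = "C:\Temp\Dictionary.txt"      # one word per line

$DictionaryFileContent = Get-Content $DictionaryFile
$DictionaryFileRows    = $DictionaryFileContent.Count
$Columns = 100
$Rows    = 1000

# A StreamWriter is much faster than Add-Content when writing thousands of cells
$FileStream = New-Object System.IO.StreamWriter($TempTXT)
```

Once the loops have run, don't forget a $FileStream.Close() so everything is flushed to disk before Excel opens the file.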
```powershell
# Generate the first line with the headers
For ($Column = 1; $Column -le $Columns; $Column++) {
    $RandomWordLine = Get-Random -Minimum 1 -Maximum $DictionaryFileRows
    $RandomWord = $DictionaryFileContent[$RandomWordLine]
    $FileStream.Write("`t$RandomWord")
}

# And the rows with the content
For ($Row = 1; $Row -le $Rows; $Row++) {
    $FileStream.Write("`r`n") # New line
    $RandomWordLine = Get-Random -Minimum 1 -Maximum $DictionaryFileRows
    $RandomWord = $DictionaryFileContent[$RandomWordLine]
    $FileStream.Write("$RandomWord`t") # Row first column
    For ($Column = 1; $Column -le $Columns; $Column++) {
        $NumberTXT = [string](Get-Random -Minimum 1 -Maximum 10000) + "`t"
        $FileStream.Write($NumberTXT) # The actual numbers in the table
    }
}
```
I hope you found the "`t" and "`r`n" directives interesting. This is how you tell PowerShell to put in a tab and a new line. Also, because we want to use the search refiners, we update the Creator and Last Modified By fields just as we learned earlier. Cheesy easy. Yeah... Not quite.
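For completeness, here is one way to set those two fields on an open workbook through the COM object's BuiltinDocumentProperties collection. "Author" and "Last author" are the standard built-in property names behind Creator and Last Modified By, but treat the exact calls as my sketch rather than the original script:

```powershell
# Set Creator (Author) and Last Modified By (Last author) on an open workbook
$ExcelWorkBook.BuiltinDocumentProperties.Item("Author").Value      = "Load Test User"
$ExcelWorkBook.BuiltinDocumentProperties.Item("Last author").Value = "Load Test User"
$ExcelWorkBook.Save()
```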
Act 2
As I've said earlier, while using Office COM objects from PowerShell is possible, the performance is not the best. The same applies to Excel as well, of course. Imagine you have to create a table with 100 columns and 1000 rows. This is how long it would take:
Even if we have to do the conversion from TXT to XLSX, we will still be well below the time it takes with the COM object. I guess it's a no-brainer what we are going to choose. With Word documents the method was simple: we generated a long string, then pasted it into a document, saved the document as some name, and that was it. It would be good to use this with Excel, wouldn't it? Well... Yes, it would.
Climax
Act 3
Tiny little problem: Excel does not work as Word does. There's no Selection object, so channeling information directly into an Excel file is not possible. Why is this a problem? Two reasons:
- If we do not have a template Excel file, then we have to create TXT files, then convert them into XLSX. Not a big burden, but it has a little performance impact.
$ExcelWorkBook = $ExcelApplication.Workbooks.Open($TempTXT)
$ExcelWorkBook.SaveAs($ExcelFile, [Microsoft.Office.Interop.Excel.XlFileFormat]::xlWorkbookDefault)
$ExcelWorkBook.Saved = $true
$ExcelWorkBook.Close()
- If we do have a template file, then we have to copy-paste the content into the file. Yes, copy-paste. It works, but if you want to use the machine while the files are being generated, it might cause some problems.
$TempDocument = $ExcelApplication.Workbooks.Open($TempTXT,$null,$true)
$TempSheet = ($TempDocument.Sheets)[1]
$CopiedContent = $TempSheet.UsedRange.Copy()
$Pasted = $ExcelSheet.Range("A1").PasteSpecial()
Now there are a few things you might have noticed:
- The [Microsoft.Office.Interop.Excel.XlFileFormat]::xlWorkbookDefault directive defines the output file format, so if you want to generate different files, you could change it to fit your needs. The full list of possible output formats can be found in the XlFileFormat enumeration article.
- The ($TempDocument.Sheets)[1] reference is not like other array references, as we do not start from zero but from one.
- Last, but not least, the $TempSheet.UsedRange.Copy() call is the one that copies the content to the clipboard, and this is where you might wish not to use your computer, because you might end up losing some information.
Act 4
The rest should be pretty straightforward, as the property bag of an Excel file is the same as a Word document's. Well... It is. In a way. But not when you are doing the simple TXT to XLSX conversion. For some reason Excel decides that in this case the Creator property should be empty. Even more... Not just empty, but not present in the core.xml file at all.
<?xml version="1.0" encoding="UTF-8" standalone="true"?>
<cp:coreProperties xmlns:
<cp:lastModifiedBy>Zsolt Illes</cp:lastModifiedBy>
</cp:coreProperties>
Of course we could set all these properties in the Excel COM object directly, but as we discussed a few times earlier, this would have some serious performance impact. This means that we have to create it from PowerShell. It is a pretty straightforward operation:
$CreatorNameSpace = ''
$CreatorElement = $CoreXML.CreateElement('dc','creator',$CreatorNameSpace)
$null = $CoreXML.DocumentElement.AppendChild($CreatorElement)
For reasons unknown to me PowerShell 5 creates the new element like this:
<dc:creator xmlns:
Instead of this:
<dc:creator />
Purely from an XML perspective there is little difference between the two, as the first one means the same, just a bit more noisily. Excel, however, does not like the first option, and if we leave it in the core.xml like this, the Excel file becomes invalid. To work around that we have to clear that namespace reference from the element. The only way I found to do this is to rip it off at the string level:
$CreatorNameSpaceToNull = ' xmlns:dc="' + $CreatorNameSpace + '"'
$CoreXML = $CoreXML.OuterXml.Replace($CreatorNameSpaceToNull,'')
If you know a more elegant solution, please share it in the comment section.
[Edit: 2017.03.14]
I got the solution for the above from my colleague Roman Lutz.
[System.Xml.XmlNamespaceManager]$NameSpaceManager = New-Object System.Xml.XmlNamespaceManager $CoreXML.NameTable
$NameSpaceManager.AddNamespace('dc', '')
$DCNameSpace = $NameSpaceManager.LookupNamespace('dc')
$NewElement = $CoreXML.CreateElement('dc', 'creator', $DCNameSpace)
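Putting those pieces together, here is a minimal sketch of the whole Creator fix: load core.xml from the unpacked file, create the element through the namespace manager, and save it back. The variable names $CoreXMLPath and $CreatorName are my own illustrative choices, and $CreatorNameSpace is assumed to already hold the dc namespace URI:

```powershell
# Load the core.xml of the unpacked Excel file
[xml]$CoreXML = Get-Content -Path $CoreXMLPath

# Build the dc:creator element through a namespace manager, as shown above
[System.Xml.XmlNamespaceManager]$NameSpaceManager = New-Object System.Xml.XmlNamespaceManager $CoreXML.NameTable
$NameSpaceManager.AddNamespace('dc', $CreatorNameSpace)
$DCNameSpace = $NameSpaceManager.LookupNamespace('dc')

$CreatorElement = $CoreXML.CreateElement('dc', 'creator', $DCNameSpace)
$CreatorElement.InnerText = $CreatorName
$null = $CoreXML.DocumentElement.AppendChild($CreatorElement)

# Write the modified core.xml back
$CoreXML.Save($CoreXMLPath)
```

This avoids the string-level replace entirely, since the element is created in the proper namespace to begin with.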
Falling Action
Act 5
We now have everything to put the solution together:
- A dictionary for the row and column headers,
- A random generator for the numbers,
- A list of users and dates to update the document properties,
- A way to push all this information into an Excel file.
- I’ve not detailed it in this article, but my solution also contains a routine to use a pre-defined Excel template. Stay tuned for the last episode of the series, where I detail the challenges with creating PowerPoint presentations.
Dick Nagtegaal
Dick Nagtegaal created a post,
How to navigate to implementation of generic interface for specific type?
Let's say we have an interface ICommandHandler<T>. There are several implementations of the interface for different T's (dozens in our code) like: public class AddStockRequestLineItemCommandHandler :...
Dick Nagtegaal created a post,
JavaScript Intellisense not showing jQueryUI Widgets?
When, in a MVC4 project, I add a .js-file to my Scripts folder, R# seems to be showing IntelliSense like it's supposed to. For example, if I type /// <reference path="jquery-1.8.2.intellisense.js" /...
Dick Nagtegaal created a post,
Do CamelHumps work when selecting a method to override?
When I try to override a method in a class, a list of overridable methods appears. I'd expect to be able to use CamelHumps to select an item in the list, but that doesn't work. Is ReSharper suppose...
Dick Nagtegaal created a post,
Parameter 'x' is only used for precondition check(s)
I've already found some threads about this message, and the answer usually is to decorate the method with [AssertionMethod] or decorate the parameter with [UsedImplicitly]. However, I don't think I ...
Dick Nagtegaal created a post,
Should ReSharper be able to provide IntelliSense on LINQ to Entities?
I'm not 100% sure if ReSharper should be able to do this, but I suppose it should (as VS itself does). I'm using LINQ to Entities. I've created a new Web Application project, added a new ADO.NET Ent...
LyondellBasell is planning around $3bn in capital investment (capex) to build two major projects in the US – its new Hyperzone high density polyethylene (HDPE) plant and a new propylene oxide/tertiary butyl alcohol (PO/TBA) facility.
The company is ready to “break ground in one month” on its 500,000 tonne/year HDPE expansion in La Porte, Texas, said Thomas Aebischer, CFO of LyondellBasell, at the company’s investor day.
The HDPE project, expected to start up in mid-2019, is expected to cost $700m-750m and generate earnings before interest, tax, depreciation and amortisation (EBITDA) of $150m-200m/year based on average 2016 margins.
LyondellBasell, as a global petrochemical player, appears to be in prime position to increase PE exports from the US.
“There will be a lot more global trade in PE, and we are well positioned to export from the US and market product throughout the world,” said Bob Patel, chairman and CEO of LyondellBasell at the company’s investor day.
“Our marketing plans are becoming more global – a customer in Shanghai is as important as one in Chicago.”
LyondellBasell has the global asset base and employees to be able to serve markets around the world, he noted.
The company has around 6,000 employees in the US, about 6,000 in Europe and 1,000 in Asia and the Middle East, Patel pointed out.
Also, LyondellBasell will make a final investment decision (FID) on its PO/TBA project in Houston, Texas by Q3 2017. The plant, which will have capacity of 450,000 tonnes/year of PO and 900,000 tonnes/year of TBA, is targeted for start-up in 2021, a slight delay from the previous timeline of late 2020.
“We rushed a few projects earlier – we are learning and maturing. We want to make sure we have the up-front scope right,” said Patel in response to an analyst who noted the slip in the timeline for the PO/TBA project.
“PO/TBA is the low-cost technology to produce PO. If it’s not us [expanding capacity], then who? But it’s not just for [market] share – we want to earn a good return,” said Patel.
“We have many other projects in the queue. By Q3, it’s time. Let’s decide, and if not, let’s move on to the next one,” he added.
The PO/TBA project would cost $2.0bn-2.5bn, and generate $300m-400m in annual EBITDA based on average 2016 margins, Aebischer noted.
PROJECTS UNDER STUDY
Other projects under consideration include expansions of 250,000 tonnes/year of ethylene, 136,000 tonnes/year of PE and 159,000 tonnes/year of polypropylene (PP).
These projects would cost $405m-440m and potentially generate annual EBITDA of $200m-290m based on average 2016 margins – an extremely attractive proposition just based on the numbers.
These projects had been studied previously and “will be coming back to the forefront in the near future,” said Paul Augustowski, senior vice president of Olefins & Polyolefins – Americas. “I see a tremendous amount of opportunity in the propylene chain,” he added.
LyondellBasell is also studying new projects involving around 500,000 tonnes/year of propylene, a 500,000 tonne/year PP unit, and a 500,000 tonne PE unit. However, no cost figures or potential EBITDA returns were included with these projects.
“It’s a very exciting time for us. These projects take a while for us to develop, and we think the timing is going to be just perfect for us to bring some of these forward in full scale,” said Augustowski.
On additional PE and PP plants, these projects “could come into focus in the latter part of this year or next year”, noted Patel, with decisions in the second half of 2018. Start-ups could occur in 2021 and 2022, noted Augustowski.
M&A APPROACH
Another potential use of capital for LyondellBasell is in mergers and acquisitions (M&A). The company will exercise discipline in evaluating and executing M&A, the executives said.
“Discipline is important” – if it does not meet the company’s high hurdle metrics, “we won’t do it”, said Patel.
In describing the company’s disciplined and rigorous process for M&A, Aebischer said the first priority is to maintain LyondellBasell’s dividend, and then its investment-grade credit rating.
From an M&A standpoint, the company will look for “opportunities to strengthen our growth and margin profile, and improve earnings stability”, said Aebischer.
Any deal must be accretive to earnings per share within two years and have an internal rate of return (IRR) of at least 12%, he added.
LyondellBasell could focus on businesses that add stability to its earnings.
Patel mentioned that propylene oxide (PO) underpins stability in its Intermediates & Derivatives segment, and that it has been modestly growing its compounding business every year.
“There are opportunities without going into pure specialties,” said Patel.
On Sun, Nov 05, 2006 at 12:53:23AM +0100, Christoph Hellwig wrote:
>..
>
> The dev_to_node wrapper is not enough as we can't assign to (-1) for
> the non-NUMA case. So I added a second macro, set_dev_node for that.
>
> The patch below compiles and works on numa and non-NUMA platforms.
>
>
Hi Christoph,
dev_to_node does not work as expected on x86_64 (and i386). This is because
the node value returned by pcibus_to_node is initialized after a struct device
is created with current x86_64 code.
We need the node value initialized before the call to pci_scan_bus_parented,
as the generic devices are allocated and initialized
off pci_scan_child_bus, which gets called from pci_scan_bus_parented.
The following patch does that using "pci_sysdata" introduced by the PCI
domain patches in -mm.
Signed-off-by: Alok N Kataria <alok.kataria@calsoftinc.com>
Signed-off-by: Ravikiran Thirumalai <kiran@scalex86.org>
Signed-off-by: Shai Fultheim <shai@scalex86.org>
Index: linux-2.6.19-rc4mm2/arch/i386/pci/acpi.c
===================================================================
--- linux-2.6.19-rc4mm2.orig/arch/i386/pci/acpi.c 2006-11-06 11:03:50.000000000 -0800
+++ linux-2.6.19-rc4mm2/arch/i386/pci/acpi.c 2006-11-06 22:04:14.000000000 -0800
@@ -9,6 +9,7 @@ struct pci_bus * __devinit pci_acpi_scan
{
struct pci_bus *bus;
struct pci_sysdata *sd;
+ int pxm;
/* Allocate per-root-bus (not per bus) arch-specific data.
* TODO: leak; this memory is never freed.
@@ -30,15 +31,21 @@ struct pci_bus * __devinit pci_acpi_scan
}
#endif /* CONFIG_PCI_DOMAINS */
+ sd->node = -1;
+
+ pxm = acpi_get_pxm(device->handle);
+#ifdef CONFIG_ACPI_NUMA
+ if (pxm >= 0)
+ sd->node = pxm_to_node(pxm);
+#endif
+
bus = pci_scan_bus_parented(NULL, busnum, &pci_root_ops, sd);
if (!bus)
kfree(sd);
#ifdef CONFIG_ACPI_NUMA
if (bus != NULL) {
- int pxm = acpi_get_pxm(device->handle);
if (pxm >= 0) {
- sd->node = pxm_to_node(pxm);
printk("bus %d -> pxm %d -> node %d\n",
busnum, pxm, sd->node);
}
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at
Please read the FAQ at
Revision as of 03:34, 22 November 2010
Fedora uses the freenode IRC network for its IRC communications. If you want to make a new Fedora-related IRC channel, please follow the guidelines below.
Contents
Contact Information
Owner: Fedora Infrastructure Team
Contact: #fedora-admin
Location: freenode
Servers: none
Purpose: Provides a channel for Fedora contributors to use.
Is a new channel needed?
First you should see if one of the existing Fedora channels will meet your needs. Adding a new channel can give you a less noisy place to focus on something, but at the cost of fewer people being involved. If your topic/area is development related, perhaps the main #fedora-devel channel will meet your needs?
Adding new channel
- Make sure the channel is in the #fedora-* namespace. This allows the Fedora Group Coordinator to make changes to it if needed.
- Found the channel. You do this by /join #channelname, then /msg chanserv register #channelname
- Setup GUARD mode. This allows ChanServ to be in the channel for easier management: /msg chanserv set #channel GUARD on
- Add Some other Operators/Managers to the access list. This would allow them to manage the channel if you are asleep or absent.
/msg chanserv access #channel add NICK manager
or
/msg chanserv access #channel add NICK op
You may want to consider adding some or all of the folks in #fedora-ops who manage other channels to help you with yours. You can see this list with /msg chanserv access #fedora-ops list
- Set default modes. /msg chanserv set mlock #channel +Ccnt (The t for topic lock is optional if your channel would like to have people change the topic often).
- If your channel is of general interest, add it to the main communicate page of IRC Channels, and possibly announce it to your target audience.
- You may want to request zodbot join your channel if you need its functions. You can request that in #fedora-admin.
Recovering/fixing an existing channel
- If there is an existing channel in the #fedora-* namespace that has a missing founder/operator, please contact the Fedora Group Coordinator: User:Spot and request it be reassigned. Follow the above procedure on the channel once done so it's set up and has enough operators/managers to not need reassigning again.
Searched your forums a lot, found a solution to my task, but still I do not understand why I get the wrong answer every time. It's always a 'correct-1' result. It doesn't depend on what the number in the string is.
#include <cstdlib>
#include <iostream>
#include <string.h>
using namespace std;

int main(int argc, char *argv[])
{
    string s1;
    int i, x, y;
    char temp;
    s1 = "1020401";
    for (i = 0; i < s1.length(); i++) {
        temp = s1[i];
        x = x + atoi(&temp);
    }
    cout << endl << x << endl;
    system("PAUSE");
    return EXIT_SUCCESS;
}
this gives me 7, but the correct result must be 8, what did I miss?
p.s. maybe something is not written well and some of you may show me my mistakes.
A model of Region Predicate that checks if a value of type
Key is contained within the open boundaries defined by
lower and
upper.
More...
#include <spatial_open_region.hpp>
Inherits Compare.
A model of Region Predicate that checks if a value of type
Key is contained within the open boundaries defined by
lower and
upper.
To be very specific, given a dimension \(d\), we define that a key \(x\) is contained in the open boundaries \((lower, upper)\) if:

\(lower_d < x_d < upper_d\)
Simply stated, open_bounds used in a region_iterator will match all keys that are strictly within the region defined by
lower and
upper.
Definition at line 41 of file spatial_open_region.hpp.
The default constructor leaves everything un-initialized.
Definition at line 48 of file spatial_open_region.hpp.
Set the lower and upper boundary for the orthogonal range search in the region.
The constructor does not check that elements of lower are lesser than elements of
upper along any dimension. Therefore you must be careful of the order in which these values are inserted.
Definition at line 64 of file spatial_open_region.hpp.
The operator that returns whether the point is in the region or not. Returns:
- spatial::below to indicate that key is lesser or equal to _lower;
- spatial::above to indicate that key is greater or equal to _upper;
- spatial::matching to indicate that key is strictly within _lower and _upper.
Definition at line 82 of file spatial_open_region.hpp.
The lower bound for the orthogonal region iterator.
Definition at line 95 of file spatial_open_region.hpp.
The upper bound for the orthogonal region iterator.
Definition at line 100 of file spatial_open_region.hpp.
June 19, 1999
On June 18, 1999, simultaneous with the G8 meeting in Koln, Germany, people
all over the world participated in actions and events under the banner
"Reclaim The Streets." Email reports coming in today indicate that 10,000
people gathered in Nigeria and that San Francisco drew crowds of around
500. More news and reports of events will surely be posted in the coming
days. What follows is a contribution to this emerging body of material.
Reclaim the Streets European Headquarters
Below are two separate and very different reports. The first describes
the results of the virtual sit-in called by the Electronic Disturbance
Theater opposing the Mexican government that involved thousands of people
from 46 countries. The second is a longer narrative account describing
events as they unfolded in Austin, Texas, an action that involved about
50 people and resulted in three arrests. It ends with some comments on
hybridity, meshing the virtual and the real.
On June 15, the Electronic Disturbance Theater began sending out email announcements
urging people to join in an act of Electronic Civil Disobedience to stop
the war in Mexico. The call made in conjunction with the Reclaim The Streets
day of action was intended to introduce a virtual component to the numerous
off-line actions happening all over the world. But a strong motivation
for the action was also due to the fact that in recent weeks there has
been a significantly higher level of government and military harassment
of Zapatista communities in Chiapas, with reports indicating as many as
5,000 Zapatistas have fled their communities.
The suggested action was for people using computers to point their Internet
browser to a specific URL during the hours of 4:00 and 10:00 p.m. GMT.
By directing Internet browsers toward the Zapatista FloodNet URL, during
this time period, people joined a virtual sit-in. What this meant was
that their individual computer began sending re-load commands over and
over again for the duration of the time they were connected to FloodNet.
In a similar way that people were out in the streets, clogging up the
streets, the repeated re-load command of the individual user - multiplied
by the thousand engaged - clogged the Internet pathways leading to the
targeted web site. In this case on June 18, FloodNet was directing these
multiple re-load browser commands to the Mexican Embassy in the UK. ()
The results of the June 18 Electronic Disturbance Theater virtual sit-in
were that the Zapatista FloodNet URL received a total of 18,615 unique
requests from people's computers in 46 different countries. Of that total,
5,373 hits on the FloodNet URL - 28.8 percent - came from people using
commercial servers in the United States - the .com addresses. People using
computers in the United Kingdom accounted for the second largest number
of participants, 3,633 or 19.5 percent. People with university accounts
in the U.S., 1,677 of them, made up the third largest category of participants
at 9.0 percent. Interestingly, the fourth largest category of participants
came from .mil addresses, from the U.S. military, for which there were
1,377 hits on the FloodNet URL, at 7.4 percent. Included among the military
visitors were people using computers at DISA, the Defense Information
Systems Agency. [In the same way that police help to block the streets
when they show up at a demonstration, the military and government computer
visitors to the FloodNet URL inadvertently join the action.] And the fifth
largest group of participants were from Switzerland with 1,276 or 6.8 percent.
The remaining 5,329, or 28.6 percent, of global participants in the June
18 virtual sit-in came from all continents including 21 countries in Europe
(Austria, Belgium, Czech Republic, Denmark, Estonia, Finland, France,
Germany, Greece, Hungary, Ireland, Italy, Lithuania, Macedonia, Netherlands,
Norway, Poland, Portugal, Spain, Sweden and Yugoslavia), 7 countries in
Latin American (Argentina, Brazil, Chile, Colombia, Mexico, Peru and Uruguay),
6 countries in Asia (Indonesia, Japan, Malaysia, Singapore, South Korea
and Taiwan), 5 in the Middle East (Bahrain, Israel, Qatar, Saudi Arabia
and Turkey), Australia and New Zealand, Canada, Georgia (former Soviet
Union), and South Africa.
The global Zapatista FloodNet action on June 18 is the first that the
Electronic Disturbance Theater called for in 1999. The group began in
the spring of 1998 and launched a series of FloodNet actions directed
primarily against web sites of the Mexican government, but action targets
also included the White House, the Frankfurt Stock Exchange, the Pentagon.
The highlight was in September when the group showcased FloodNet at the
Ars Electronica festival on Information Warfare in Linz, Austria. At that
time one of the targets of FloodNet was a U.S. Department of Defense web
site. This action is noteworthy because a Pentagon countermeasure at the
time may be one of the first known instances in which the DOD has engaged
in an offensive act of information warfare against a domestic U.S. target
- an act some say could have been illegal.
More details on the Electronic Disturbance Theater can be found at:
I turned off my computer, moved away from the screen, and left work at 5:00.
My girlfriend picked me up in the car and we passed by the bank so I could
cash my paycheck. Good thing too. My balance had literally been 99 cents.
Then we drove to the radio station, KOOP, where we do a half-hour news
program every Friday.
It was hot inside the station, as it was outside. But the studio was
nice and cool, so we sat there and waited for the Working Stiff show to
end and the news to begin. We listened to John do a phone interview with
someone from the pipe-fitters union. They were talking about a strike.
We started off the news with a long piece from A-Infos about the World
Trade Organization. It was a decent article but a bit too long to read
on the air. The piece ended with a call for people to travel to Seattle
later in the year to oppose the third WTO ministerial conference.
After the news we walked over to join a handful of IWW folks who put
out the Working Stiff Journal. They were at Lovejoys, a bar with a decent
selection of beer just off 6th Street.
I started talking to a few friends about the war in Yugoslavia and an
idea I'd had that it might good to form a focus group on the history,
present, and future of war. The idea being that the left doesn't really
understand war anymore, or rather, that the left is using the same techniques
to oppose war that it used 30 years ago, but that the way wars are fought
has changed. The few who I talked to supported the idea and had some good
suggestions.
After swilling down a few pints, at around 7:30, my girlfriend and I left
Lovejoys and drove over to Ruta Maya. All I knew was that the Critical
Mass bike ride was to end up there. And the ride was Austin's effort to
be part of the global Reclaim The Street actions that were happening all
over the world.
Ruta Maya is a coffee shop in downtown Austin's warehouse district. They
import coffee from Chiapas. Local activist groups often stage benefits
and events there.
When we got to Ruta Maya people from the bike ride were already filtering
in. They had started the ride up by the university. I wasn't on the ride
so I only heard snapshots of what had happened. But I learned that a few
had spent the previous night working on some stickers that said, "Closed"
and "Out of Order." These were to put on ATM machines and other relevant
symbols of capital. The ride passed by the Gap. For a moment Gap workers
were harassed for selling clothes manufactured in sweatshops.
The crowd inside and outside on the elevated sidewalk was a mix of Ruta
Maya regulars, people who came to hear an acoustic guitarist playing inside,
customers of Ruta Maya's cigar shop, anyone who happened to be walking
by, and of course the cyclists from the Critical Mass/RTS ride.
First I talked to some people involved in Free Radio Austin, a local
micropower radio station shut down by the FCC a few weeks ago - which
is incidentally scheduled to go back on the air today. We didn't talk
about that, but about some of the problems with a new space here called
Pueblos Unidos. A long story, but basically there is a power struggle
among the original tenants of this allegedly collective warehouse space
on the eastside of Austin. Too complicated to go into here. Conversations
about Pueblos Unidos, the Grassroots News Network, and Point A threaded
through the evening.
The riders included people I've know from Earth First!, from the local
bicycle activist scene, and a whole new set of folks from Point A who
I don't really know. I just thought that Ruta Maya was a gathering point
after the ride was finished. But it turned out to be something else.
After not long, some people started talking about how to encourage others
to start standing out in the street in front of Ruta Maya. People had
just finished the ride and were all charged up with energy. A moment later,
two young riders were moving a construction barricade and a few orange
cones into the lane of traffic coming from the west. While at the other
end of the block a group took similar barricades and placed them to stop
traffic coming from the east.
And then, one at a time, people started leaving the sidewalk or leaving
the edges of the street to stand out in the middle. For a little while
there were just about 10 people. A few standing near the barricade. A
few more down at the other end of the street. And more starting to filter
out right in front of Ruta Maya. I actually hadn't anticipated this. I
wanted to sit down so I asked someone to pass me down a chair from the
elevated sidewalk.
I sat on the chair in the middle of one lane. Someone else picked up
another chair and sat down near me. With barricades on both ends of the
block, people sitting in chairs, cars lurching forward slowly and trying
to get out, others in Ruta Maya started to take notice, and those less
inclined to be the first ones to venture out into the street, followed.
A Ruta Maya worker came out and said that needed his chair back. I didn't
argue. Ruta Maya is a cool place. And by sitting there momentarily it
had served to encourage a few more to join.
Soon there were people in both lanes of traffic out in front of Ruta
Maya. At its peak maybe there were as many as 50. Not a huge crowd. Enough
to reclaim the street - temporarily. But not enough to remain once the
police started to arrive. And of course they did.
But before the police showed up, a few of the people whose idea it was
to reclaim this particular section of street spoke loudly and explained
what Reclaim The Streets was all about. Small flyers titled "Whose City
Is This Anyway?" were passed out. And people started doing a "cheer" of
sorts. Lacking were drums or other instruments that are always good for
stirring up a crowd.
I first noticed a brown shirted Sheriff's deputy get out of a sports utility
vehicle. But he simply walked by, seemingly oblivious to what was happening.
Soon thereafter the bike cops showed up. Like a number of urban police
forces in the U.S., Austin has its police-on-bicycle contingent, mostly
used for patrolling the busy downtown area.
The bike cops started to move around the crowd and address people whom
they thought might be leaders. I was actually standing with my back turned,
talking to a friend, when one bike cop came up to us. Maybe because I
was smoking a cigar he thought I was a 'revolutionary leader'. (Just kidding.)
Anyway, the bike cop said to us, "I'm contacting my supervisor and if
you aren't out of the street in ten minutes, we are going to start making
arrests."
I told the bike cop that I wasn't in charge. But anyway, my friend and
I passed on this warning to a few others. So when the three police vans
and the handful of marked and unmarked cars showed up - to inadvertently
block the streets themselves - we were not surprised.
The three vans barreled down the road from the east and the marked and
unmarked cars from the west, stopping right at the intersection of 4th
and Lavaca. Obviously, given that there were not many of us and given
that we had neither anticipated nor were we prepared to take a stand,
we mostly filtered back off the street and onto the side.
But there were a few who - for whatever reason - were not so content
to give up the street that quickly. Bike cops and regular police officers
stood in the street in between the three vans and the rest of us on the
side of the road. People were jeering at the cops. I didn't see exactly
what happened - or what precipitated it - but in a flash a group of cops
lunged forward and pulled someone from out of the crowd on the side, not
even someone who was standing closer to the police, but someone behind
another. And then another was arrested. And then a third.
People were yelling and screaming and the cops: "You fucking pigs!";
"Don't you have any real criminals to arrest"; "Whose street? Our street!"
They remained for awhile longer. Tensions quieted down. And the vans and
the marked and unmarked cars drove off.
All through this, my girlfriend had been trying to call a few local media
outlets. She was at the payphone in front of Ruta Maya. At one point she
told me she had got through to KXAN. But no media ever showed up.
With the police gone, three of us on the way to jail, a number of the
riders - who had only wanted to ride their bikes and not get involved
with this mess - on their way out, the ones who had planned this Austin
Reclaim The Street action bewilderedly consulted about how next to proceed.
My girlfriend and I had both been arrested before and were quite familiar
with the process. She knew the inside of Austin's jail and something about
the procedure for getting out. She offered her advice to the younger activists
and was ready to leave them to it. But I suggested maybe we ought to also
go down to the police station to help sort things out. So we did.
By the time we parked the car and got inside the police station, there was
already a crowd of perhaps 20 people, mostly sitting on the floor, inside
the area where you ask about new arrestees. It looked like we were now
reclaiming the police station, rather than the street!
We weren't sure if the two young women and one young man were taken to
this station. And there was speculation that they could have taken them
to any number of substations throughout the city, as they are sometimes
apt to do.
None of the people whose idea it was to reclaim the section of the street
in front of Ruta Maya were prepared for arrests, and in Austin there aren't
really known activist lawyers - like in some U.S. cities - readily available
to help in moments like this. Although a few of the people who ended up
being in the Austin RTS action were seasoned activists, most seemed to
be people who had never actually had to deal with police arrests before.
Or if they had, they certainly hadn't made any arrangements in advance.
So everything was handled on the spot.
My girlfriend has a friend who is a lawyer who has helped her out in
the past. While she was on the phone to her, others were over at the main
desk waiting to hear if in fact the three were at this station and what
they were being held for. Finally, at some point between 9:30 and 10:00
we learned that yes in fact the three had been brought to this station,
and what the charges were.
One was charged with a Class C misdemeanor for refusing to obey the order
of a police officer. Another was charged with a Class C misdemeanor for
disorderly conduct. But the third was charged with a Class B misdemeanor,
a more severe level, for "inciting a riot."
First of all, there was no riot, by any stretch of the imagination. But
more importantly, the young woman charged with inciting a riot - as I
later learned - had merely begun to yell out a cheer. She had said, "Give
me a 'P'," - and was probably going to spell "PIG" - at which point the
cops lurched forward to grab her from out of the crowd.
My girlfriend's friend who is a lawyer advised us that it would be best
if a boisterous crowd did not linger in the police station waiting area
as it might only antagonize them and encourage them to hold the three
longer. So a group drifted off and went to Lovejoys - the bar where we
had started the evening off earlier.
My girlfriend and I, and a couple of friends of the people being detained,
remained at the police station. We learned that the two with Class C misdemeanors
could be released on a $200 bond, although it wouldn't be until
much later in the night, actually the wee hours of the morning, but that
the young woman charged with inciting a riot would have to wait until
a judge came at 10:30 in the morning.
When we saw that it was senseless to wait at the police station any longer,
the rest of us left as well, joining others back at Lovejoys where we
drank from pitchers of beer, mulled over what had just transpired, and
continued an earlier thread about some of the internal dynamic of the
new warehouse space in Austin called Pueblos Unidos.
In the middle of the night the two with Class C misdemeanors were bailed out.
And at 10:30 or so on June 19, my girlfriend's lawyer friend - a bit begrudgingly
- had to go down to the station to deal with the magistrate and help the
one with the inciting riot charge get released. My girlfriend went back
to the police station in the morning as well - in part to console her
lawyer friend who had had to be bothered on a Friday evening she was spending
with her husband who works out of town all during the week. She was able
to help get the one with the inciting riot charge out of jail, by being
able to visit her while in custody and explain the procedure for getting
a personal release - but did not agree to be the lawyer for these cases.
Compounding factors were that two of the people arrested, including the
one with the inciting a riot charge, had just returned to the country
- literally on the afternoon of June 18 - after having been in Guatemala
and Mexico.
Now, a criminal lawyer will need to be found. People will have to spend
precious and limited resources on the entire legal process. Those who
must return to court will have added stress and worry. And what started
out as an evening of revelry ends up in the onerous world of the courts.
Several things are clear. While a degree of planning for this action was undertaken
- in that minimally a date, time, and place were chosen and the action
was given some form and content - there definitely were important elements
in the planning process that were overlooked. The first, obviously, is
that the people who set out to reclaim the street should have realized
that this sort of activity generally falls outside the boundary of the
law, that the police were likely to show up, and that arrests were
possible. And given the possibility of arrest, contingency plans should
have been made: i.e. there should have been a lawyer on standby and even
some sort of legal observer.
The second oversight was that there was no attention given to drawing
in media, nor were any of the participants using any audio or video recording
devices. No photographs nor any videotape of the above arrests were made
to supply concrete evidence demonstrating that in fact the Class B misdemeanor
inciting to riot charge is ludicrous. And finally it seems that the nature
and purpose of the action was not made clearly manifest to passersby or
to unconnected people sitting inside or outside of Ruta Maya.
All of these things - legal preparation, media work, and public relations
- are aspects of street actions that are fairly important. And there are
clearly people in Austin who have strong skills in all of these areas
and whose services could have been called upon. I'm not sure, but I think
the Austin RTS action was a last minute one, pulled off by just a few
people who didn't have time to do everything needed.
I don't want to sound too critical. During the moment - albeit a short
one - there was a temporary autonomous zone. People did in fact reclaim
a portion of a street. But the cost of doing this is that several people
now unwittingly must face the hassle and expense of the court system.
One year ago I wrote a few short pieces with the theme of hybridity, talking
about the goal of developing actions that combined on-line (virtual) and
off-line (real) elements. In part this was a reaction to criticism the
Electronic Disturbance Theater received which claimed that by acting purely
in the virtual realm we were isolating ourselves from people who focused
most or all of their attention on doing things in the street or in the
flesh. We tried to introduce this idea of Electronic Civil Disobedience
to the community of activists who every year, for the past few anyway,
have gone to the School of the Americas to participate in the more traditional
civil disobedience style of action. And at a national conference on civil
disobedience held in Washington, DC, this past January, two from the EDT
were part of a panel discussion on Electronic Civil Disobedience. Even
so, this notion of joint computer-based and street-based actions has a
long way to go. There is still a disjuncture, a gap, between what's happening
now on the Net and what people are doing on the street. Many people engaged
in yesterday's street action in Austin, for example, probably had no idea
that the virtual component was even taking place.
EDT's participation in the global RTS actions is another step in developing
both the theory and practice of this sort of joint engagement. The Internet
is inherently global and so Internet-based actions seem to be a logical
match with global street actions. But this is not to say that the particular
example of FloodNet is the most ideal way of meshing the street and Net
together. The FloodNet action is something that individuals may join from
their computers at home, work, or in an educational environment. Even
though acting simultaneously, jointly, the participants in the on-line
and off-line actions in this case may have been completely different sets
of people. What can be done differently?
Some examples from Amsterdam and London over the course of the last few
years are instructive. During demonstrations against a meeting of the
EU in Amsterdam - which involved massive police presence in the streets
- people created web pages in which they mapped out the location of the
police. The pages were constantly updated with relevant information to
demonstrators from people sending in email messages or calling in from
pay phones or cell phones. In another example, in London during an occupation/takeover
of a Shell office, activists used a portable laptop connected to a cell
phone to send out announcements to the media and others once they were
inside. They were also able to directly update a web site during the occupation.
Austin's Reclaim The Street action was about as low tech as you can go.
The most sophisticated technology was probably the bicycles used for
the first part of the action. Clearly there was no digital technology.
No interface with the Net. The closest to this was probably when my girlfriend
used the payphone right in front of Ruta Maya to unsuccessfully call media
as the police were making arrests. For a moment she tapped into the telephone
infrastructure - which is basically what the Internet is.
What would have happened or what could happen in the future if we are
able to enhance these sorts of street actions with a real-time audio and
video presence? Imagine if on the elevated sidewalk in front of Ruta Maya
and out on the street several people had had video cameras and they were
taping the entire action. Further imagine that there were cables running
from the cameras to the interior of the café where people were
sitting with laptop computers capable of handling video input and these
laptops were connected to a phone line in the café - a live stream
of audio and video being netcast about the RTS action to a global audience.
Video recording and netcasting the street action may not have prevented
people from being arrested, but it certainly would have captured a public
record and people other than the participants and the observers at Ruta
Maya would have known about it. As it stands there is no recorded imagery
or audio of the Austin RTS action. Nor have there been any reports about
it in the local media. Nor does anyone on the Net - apart from those reading
this - know about it.
One would think that in a town such as Austin - one credited with having
one of the fastest-growing economies in the U.S., largely linked to the
high-tech computer industry - activists here would have the wherewithal
to develop these sorts of uses of seemingly readily available digital
technology. But there are obstacles. Some of the obstacles are ideological,
perhaps. A lingering anti-technology critique. Some of the obstacles are
economic. A genuine lack of access. Some obstacles may simply be that
the ideas are still new.
To conclude - well, at least to stop; concluding may be premature
right now - in addition to an obvious need for more attention to some
basic legal, media, and publicity training, there is a need to think about
and to experiment more with ways of bringing the street and the Net closer
together. We should address this question: how do we bring what is happening
on the street onto the Net?
The Zapatista FloodNet action in conjunction with the global Reclaim
The Street actions is an example of real-virtual hybridity at a world-wide
level. But it is only one form and it lies within the area of Internet
as site for resistance and direct action. Finally, then, it seems there
are at least two important areas where further exploration is needed:
the first, greater experimentation with other forms of on-line action
and electronic civil disobedience to be used jointly with actions on the
street; the second, greater experimentation with bringing the street and
the Net closer together so that what happens on the street is netcast
in real-time onto the Net to a global audience.
last updated: December 31, 2005
Updated to v1.4: appearance changed, new improvements and bugfixes (see History at the bottom for changes). I hope you'll enjoy this, hopefully final, version!
With a few more months of C# hobby experience gained by trial and error, and some new updates for this NFO Viewer, I have once again updated this article. I'm running Windows 7 Ultimate x64 with .NET 4.5, and I use Microsoft Visual Studio 2010.
This application has a custom Form, functions to close the application and mute the music, a custom icon, a picture that opens a website when you click it, and - the main thing - the richTextBox. In this box I load the NFO file (text), which is presented in custom colors, scrolls down automatically, loops, and cannot be interacted with. It has to be presented like the end credits of a movie! Using this article you can pick any part of the code to create your own viewer, or use the source to build a custom version of the viewer provided here.
I had a lot of NFO files, which I had to open with Notepad to view them. Notepad doesn't show all symbols correctly by default and doesn't position everything in the right place. Eventually I got some NFO Viewer samples that ran instantly and had a lot of custom features. I wanted that too, but couldn't change those files. So I decided to create one myself!
Since this is a new update, some buttons/functions are different. I'll leave the old options as-is, so people can still use those as well. Changes since 1.3: no Close button (2) anymore (press Escape instead), Test Speed buttons removed (7), changed speed options (1x +, 1x - instead of both 2x).
Picture: 'Argaï NFO Viewer v1.1'
Picture: 'Argaï NFO Viewer v1.3'
Picture: 'Argaï NFO Viewer v1.4 Final' (full image below)
The above application contains the following parts:
Transparency
Button
checkBox
pictureBox
Buttons
KeyPress
Icon
PictureBox
NOTE: As I've changed a lot in v1.4, not all code is included in this article. If you want to check some specific code, use the provided source code. Let's go through how to make those individual parts with the specific options I wanted. You can change whatever you want to your own liking, but this is how I wanted it, and it was hard to find around the internet. First, my application imports everything stated below:
using System; // Default, Used for handles.
using System.Diagnostics; // To use Process.
using System.Drawing; // Default, for point locations and colors.
using System.Windows.Forms; // Default, Formrelated code (events, properties, objects).
using IrrKlang; // I added this myself (music player).
using System.Reflection; // To work with Assemblies.
using System.Runtime.InteropServices; // This is for the dll-import.
using System.IO; // Work with files and streams.
using Un4seen.Bass; // This is for the new BASS ModPlayer.
Form properties:
BackgroundImage
BackgroundImageLayout
DoubleBuffered
Size
StartPosition
Text
TransparencyKey
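These values are normally set through the designer's Properties window; as a reference, here is a minimal sketch of setting the same properties in code. The image path, size, and key color are placeholder assumptions, not the values from this project.

```csharp
using System.Drawing;
using System.Windows.Forms;

public partial class Form1 : Form
{
    public Form1()
    {
        InitializeComponent();

        // Hypothetical values - the real project sets these in the designer.
        this.BackgroundImage = Image.FromFile("skin.png");   // placeholder image path
        this.BackgroundImageLayout = ImageLayout.Stretch;
        this.DoubleBuffered = true;                          // reduces flicker while the text scrolls
        this.Size = new Size(800, 600);                      // placeholder size
        this.StartPosition = FormStartPosition.CenterScreen;
        this.Text = "NFO Viewer";
        this.TransparencyKey = Color.Magenta;                // pixels of this color become see-through
    }
}
```

Whatever color you pick for TransparencyKey should not occur in the background image itself, or those parts of the window will turn transparent too.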
We want to be able to move our program around. To do this, create two new Events for the form and name them wisely, or double-click the field and use the default names. In my application I always use the default ones. In the Properties window, select the lightning icon (Events).
MouseDown: Form1_MouseDown
MouseMove: Form1_MouseMove
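Double-clicking the event fields makes the designer generate the hookup code for you; if you'd rather wire the handlers yourself, the equivalent (a sketch, assuming the default handler names above) is:

```csharp
// In the constructor, after InitializeComponent():
this.MouseDown += new MouseEventHandler(Form1_MouseDown);
this.MouseMove += new MouseEventHandler(Form1_MouseMove);
```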
This will create two new methods to do something when these events happen on your picture (form). Add the following code to the methods in your form:
// As I use this in more methods, I placed it before public Form1()
public Point mouse_offset;
// These 2 methods will make it possible to drag your application around.
private void Form1_MouseDown(object sender,
System.Windows.Forms.MouseEventArgs e)
{
mouse_offset = new Point(-e.X, -e.Y);
}
private void Form1_MouseMove(object sender,
System.Windows.Forms.MouseEventArgs e)
{
if (e.Button == MouseButtons.Left)
{
Point mousePos = Control.MousePosition;
mousePos.Offset(mouse_offset.X, mouse_offset.Y);
Location = mousePos;
}
}
There was a bug: if users pressed the arrow keys, focus moved into the richTextBox and they could navigate in it using the arrow keys and PageUp/PageDown. I searched and found working code to disable this. I also added the Escape key function here, as Escape is a command key: now you can close the application by pressing Escape when it is active. In the Form_Load( ) Event, add:
foreach (Control control in this.Controls)
{
control.PreviewKeyDown += new PreviewKeyDownEventHandler(control_PreviewKeyDown);
}
Outside of this Event, add a method with following code:
void control_PreviewKeyDown(object sender, PreviewKeyDownEventArgs e)
{ // This blocks arrow/directional keys.
if (e.KeyCode == Keys.Up || e.KeyCode == Keys.Down || e.KeyCode == Keys.Left || e.KeyCode == Keys.Right)
{ e.IsInputKey = true; }
// Here I added the function to close app with Escape.
if (e.KeyCode == Keys.Escape)
{ this.Close(); }
}
When this code is added, you won't be able to switch from the Form to the richTextBox using the arrow keys. I also added a fade effect for the music and the application when it closes. This effect is triggered when the application receives the message that it needs to close. The following code handles fading the window's appearance and fades the music along with it. I used timers with different intervals to fade each at its own speed; you can combine them and change the speeds as you see fit.
// CLOSE FORM [FADE APPLICATION/MUSIC]
private void Form1_FormClosing(object sender, FormClosingEventArgs e)
{
if (this.Opacity > 0.01f)
{
e.Cancel = true;
Bass.BASS_SetConfig(BASSConfig.BASS_CONFIG_GVOL_MUSIC, 7000);
AppFadeTimer.Interval = 33;
VolFadeTimer.Interval = 10;
AppFadeTimer.Enabled = true;
VolFadeTimer.Enabled = true;
}
else
{ AppFadeTimer.Enabled = false; VolFadeTimer.Enabled = false; }
}
private void AppFadeTimer_Tick(object sender, EventArgs e)
{
if (this.Opacity > 0.01)
{ this.Opacity = this.Opacity - 0.01f; }
else
{
this.Close();
}
}
private void VolFadeTimer_Tick(object sender, EventArgs e)
{ vol = vol - 26; Bass.BASS_SetConfig(BASSConfig.BASS_CONFIG_GVOL_MUSIC, vol);}
As I make different app designs and versions, this may differ a lot, but it works.
Button Properties:
Cursor
BackColor
ForeColor
Location
We want to be able to close our program with an X button. In Design Mode, add a Button from the Toolbox, change the Text property to 'X' or 'Close' and size it to your liking. Add the Click Event: 'button1_Click'. This creates a method that runs when the user presses the button.
Use the following code to close the application:
// This will close the application when the user presses the X button.
private void button1_Click(object sender, System.EventArgs e)
{
this.Close();
}
checkBox Properties:
Checked: True
CheckState: Checked
We want to be able to mute (pause) the music with a checkBox. In Design Mode, add a checkBox to the form. Change the properties like the example above or to your own desire. Next we set the checkBox's event: when the user unchecks the box, the music should pause. Add the CheckedChanged Event: checkBox2_CheckedChange. Now you have the method for this event; add the code to pause or stop the music. In my case, I originally used irrKlang and now use the BASS mod player to play the mod music files. To see how I implemented this, see 10) Music (.it format). To pause the music, add the following code to the checkBox2_CheckedChange method:
// Old version irrKlang.
private void checkBox2_CheckedChange(object sender, EventArgs e)
{
CheckState state = checkBox2.CheckState; // Get the current state of the check.
if (state == CheckState.Checked) // If checked after change;
{
engine.SetAllSoundsPaused(false); // Do nothing (if already playing) or continue music.
}
else // If changed to 'Unchecked'
{
engine.SetAllSoundsPaused(true); // Pause the music.
}
}
// New Version (Bass).
private void checkBox2_CheckedChange(object sender, EventArgs e)
{
CheckState state = checkBox2.CheckState;
if (state == CheckState.Checked)
{ Bass.BASS_Start(); }
else
{ Bass.BASS_Pause(); }
}
pictureBox Properties:
BackgroundImage
BorderStyle
We want to be able to go to a website by clicking on an image - and not force Internet Explorer, but use the default browser of the client system. In Design Mode, add a pictureBox to the form. Change the Properties like the example above or to your own desire. Now add the Event from the Properties' Event window: Click, e.g. pictureBox1_Click.
In the method that we just created add:
private void pictureBox1_Click(object sender, EventArgs e)
{
Process myProcess = new Process();
{
myProcess.StartInfo.UseShellExecute = true; // This runs from command line (default browser).
myProcess.StartInfo.FileName = "";
myProcess.Start(); // Use any valid link you want above.
}
}
It's also nice to put ToolTips on a PictureBox or other Controls, so people know what it will do or where it goes. Above and in the main Form constructor, add:
System.Windows.Forms.ToolTip ToolTip1 = new System.Windows.Forms.ToolTip();
public Form1()
{
// Info ToolTips for Some Buttons.
ToolTip1.InitialDelay = 1200;
ToolTip1.SetToolTip(pictureBox1, "Go to: Dutch Argaï Project Forum!");
ToolTip1.SetToolTip(buttonQ, "Info");
}
richTextBox properties:
DetectUrls
Font
ReadOnly
ScrollBars
We want to be able to view a .nfo file correctly in the richTextBox. In Design mode, add a richTextBox to the form. Change the Properties like the example above or to your own desire. If you don't have the Load event from the Form already, create one (e.g. Form1_Load). When the Form and all components are loaded, we want to check if the .nfo file exists. When it does, convert it to the right format and present it, otherwise put the text 'NFO not found' in the richTextBox. Final code (this just shows the text in the box, no scrolling yet):
private void Form1_Load(object sender, EventArgs e)
{
// You can set 'file_name' anything you like.
string file_name = "C:\\Test\\DARTY.nfo"; // Set the filepath.
if (System.IO.File.Exists(file_name) == true) // Check for existence.
{
// Encode the file and put the endresult in our richTextBox.
System.Text.Encoding encoding = System.Text.Encoding.GetEncoding(437);
System.IO.StreamReader file = new System.IO.StreamReader(file_name, encoding);
richTextBox1.Text = file.ReadToEnd();
file.Close();
}
else // If it doesn't exist/not found...
{
// Put this text in the richTextbox instead.
richTextBox1.Text = "File not Found! :( --- Where is DARTY.nfo??";
}
}
OR we can use a .nfo file as a resource, which is better in this case. To do this, open the project properties and select the 'Resources' tab. Add an existing file (our .nfo file) and view its properties. We need to change the FileType to Text; then another option appears: Encoding. Set this to OEM United States - Codepage 437. If those options aren't available, try changing the View to 'View Details' instead of 'View In List' or 'View As Thumbnails' (just like in Windows Explorer). With this set, we no longer have to check for existence or read and encode the file ourselves. This leaves the following short code as the final result:
string file_name = Properties.Resources.Eng_Dut_01; // Resource doesn't require file extension.
richTextBox1.Text = file_name; // Put the file in the box.
The variable 'file_name' points to the resource we've added, and the Text of the richTextBox is filled with the resource data. We don't need the lines from the previous part anymore to work through the file!

We also want to disable all interaction with the richTextBox, without disabling the box itself. Disabling the richTextBox almost works, but it prevents us from setting custom colors! In v1.3 I created a new Form with the same size and location as the richTextBox, at 1% Opacity so you can't see it, and it moves around with the application as well. This prevents any interaction with our richTextBox, just as we wanted. If you want to see that code, use my source code (v1.3). In v1.4 I removed the shield Form (which stayed on top of all other Windows applications) and substituted it with blocking mouse clicks. It didn't work with the click events, but I found great code samples online that worked:
// DISABLE RIGHTCLICK + MIDDLEBUTTON ON RICHTEXTBOX.
public bool PreFilterMessage(ref Message m)
{
// Filter out WM_NCRBUTTONDOWN/UP/DBLCLK
if (m.Msg == 0xA4 || m.Msg == 0xA5 || m.Msg == 0xA6) return true;
// Filter out WM_RBUTTONDOWN/UP/DBLCLK
if (m.Msg == 0x204 || m.Msg == 0x205 || m.Msg == 0x206 || m.Msg == 0x207 || m.Msg == 0x208 || m.Msg == 0x209) return true;
return false;
}
Note that this applies to the whole active Form. But as my Form only accepts left mouse clicks anyway, I don't mind blocking the other mouse buttons for it.
We want to autoscroll the richTextBox and loop it, to view all the text like movie credits. Prepare yourself, as this code isn't easy at all. There is no built-in option to automatically scroll a box at a specified speed. There is a code option to go up line by line, but that isn't easy to read. See it this way: we need a message service from a DLL to tell the richTextBox to move up a pixel every few milliseconds. So we set up the task (moving up) and create a timer that sends the task as a message at the interval we have set.

I didn't fully understand all of the code, but it worked! See REFERENCES at the bottom for the project I got this from. The only things I changed: the speed of scrolling plus options to adjust it, looping back to the start when it reaches the end, and changing the color before the text scrolls to a new position. This was hard, as I needed to sync it with the different scrolling speeds.
Import DLLs, create all references for the code
The following code (except first 'using' line) is placed between the public partial class and public Form1( )
// Paste this line on top of your code.
using System.Runtime.InteropServices;
// The rest.
public partial class Form1 : Form
{
[DllImport("user32.dll")]
[return: MarshalAs(UnmanagedType.Bool)]
static extern bool GetScrollInfo(IntPtr hwnd, int fnBar, ref SCROLLINFO lpsi);
[DllImport("user32.dll")]
static extern int SetScrollInfo(IntPtr hwnd, int fnBar, [In] ref SCROLLINFO lpsi, bool fRedraw);
[DllImport("User32.dll", CharSet = CharSet.Auto, EntryPoint = "SendMessage")]
static extern IntPtr SendMessage(IntPtr hWnd, uint Msg, IntPtr wParam, IntPtr lParam);
struct SCROLLINFO // These are all required.
{
public uint cbSize;
public uint fMask;
public int nMin;
public int nMax;
public uint nPage;
public int nPos;
public int nTrackPos;
}
enum ScrollBarDirection
{
SB_HORZ = 0,
SB_VERT = 1,
SB_CTL = 2,
SB_BOTH = 3
}
    enum ScrollInfoMask // Win32 SIF_* flags; SIF_ALL is used in scroll() below.
    {
        SIF_RANGE = 0x1,
        SIF_PAGE = 0x2,
        SIF_POS = 0x4,
        SIF_DISABLENOSCROLL = 0x8,
        SIF_TRACKPOS = 0x10,
        SIF_ALL = SIF_RANGE | SIF_PAGE | SIF_POS | SIF_TRACKPOS
    }
const int WM_VSCROLL = 277;
const int SB_LINEUP = 0;
const int SB_LINEDOWN = 1;
const int SB_THUMBPOSITION = 4;
const int SB_THUMBTRACK = 5;
const int SB_TOP = 6;
const int SB_BOTTOM = 7;
const int SB_ENDSCROLL = 8;
private Timer t = new Timer(); // Create an instance of our Timer.
Set the speed and enable Timer to send messages.
Add this code in the public Form1( )
public Form1()
{
InitializeComponent();
// Extra Code. Interval creates the speed, Event sends it.
t.Interval = 40;
t.Tick += new EventHandler(t_Tick);
t.Enabled = true; // We want the Timer to run this instantly, so enable.
}
Almost there - the last bit of code for the autoscroll! Create the two methods t_Tick( ) and scroll( ) to run the process. I've added additional code to change the text color while scrolling. This is in t_Tick( ): it checks the current scrolling speed by reading the timer's interval, which can be adjusted by pressing + and -. Depending on the interval, it executes the matching color-change code. I've included one method, FastScrollSpeed( ) (the least code), as an example; for the rest, see the source code. Depending on the interval, the color needs to be updated on every pixel moved up, or every few pixels in the fast case. That's why I use int i to track the pixel count and reset it at the end to loop the color pattern.
// Timer for scroll + change richTextBox1 color at each interval.
int i = 1;
bool scrollEnabled = true; // If the user presses P (pausing scroll), this becomes false.
void t_Tick(object sender, EventArgs e) // Get the current interval and call the right color speed change.
{
    if (t.Interval == 80)
    { SlowScrollSpeed(); }
    else if (t.Interval == 40)
    { CenterScrollSpeed(); }
    else if (t.Interval == 10)
    { FastScrollSpeed(); }
    if (scrollEnabled) { scroll(richTextBox1.Handle, 1); } else { scroll(richTextBox1.Handle, 0); }
}
void FastScrollSpeed() // Change color every 6 ticks (pixels scrolled).
{
if (i >= 1 && i <= 6) { richTextBox1.ForeColor = Color.OrangeRed; i = i + 1; }
else if (i >= 7 && i <= 12) { richTextBox1.ForeColor = Color.DarkOrange; i = i + 1; }
else if (i >= 13 && i <= 18) { richTextBox1.ForeColor = Color.Yellow; i = i + 1; }
else if (i >= 19 && i <= 24) { richTextBox1.ForeColor = Color.LawnGreen; i = i + 1; }
else if (i >= 25 && i <= 30) { richTextBox1.ForeColor = Color.Turquoise; i = i + 1; }
else if (i >= 31 && i <= 36) { richTextBox1.ForeColor = Color.DodgerBlue; i = i + 1; }
else if (i >= 37 && i <= 42) { richTextBox1.ForeColor = Color.RoyalBlue; i = i + 1; }
else if (i >= 43 && i <= 48) { richTextBox1.ForeColor = Color.DarkViolet; i = i + 1; }
else if (i >= 49 && i <= 53) { richTextBox1.ForeColor = Color.HotPink; i = i + 1; }
else if (i == 54) { richTextBox1.ForeColor = Color.HotPink; i = 1; }
}
// Scrolls a textbox. handle: handle to our textbox. pixels: number of pixels to scroll.
void scroll(IntPtr handle, int pixels)
{
IntPtr ptrLparam = new IntPtr(0);
IntPtr ptrWparam;
// Get current scroller posion
SCROLLINFO si = new SCROLLINFO();
si.cbSize = (uint)Marshal.SizeOf(si);
si.fMask = (uint)ScrollInfoMask.SIF_ALL;
GetScrollInfo(handle, (int)ScrollBarDirection.SB_VERT, ref si);
// Increase posion by pixles
if (si.nPos < (si.nMax - si.nPage))
si.nPos += pixels;
else // No more pixels left...
{
ptrWparam = new IntPtr(SB_ENDSCROLL); // End reached.
t.Enabled = false; // Disable the Timer.
SendMessage(handle, WM_VSCROLL, ptrWparam, ptrLparam);
}
// Reposition scroller
SetScrollInfo(handle, (int)ScrollBarDirection.SB_VERT, ref si, true);
ptrWparam = new IntPtr(SB_THUMBTRACK + 0x10000 * si.nPos);
SendMessage(handle, WM_VSCROLL, ptrWparam, ptrLparam);
// I added this to loop the scrolling.
if (t.Enabled == false) // If Timer is disabled...
{
// Send scroller back to the top.
SendMessage(handle, WM_VSCROLL, (IntPtr)SB_TOP, IntPtr.Zero);
t.Enabled = true; // Enable timer again.
}
}
As we can't interact with the richTextBox anyway, it would be great if we could move the application around by dragging the richTextBox. I found some code online that enables moving the whole Form by dragging a control on it:
// These Events help to drag the Whole Form by dragging our richTextBox.
private void richTextBox1_MouseDown(object sender, MouseEventArgs e)
{
if (e.Button == MouseButtons.Left)
{
dragging = true;
pointClicked = new Point(e.X, e.Y);
}
else
{ dragging = false; }
pictureBox1.Focus();
}
private void richTextBox1_MouseUp(object sender, MouseEventArgs e)
{
dragging = false;
pictureBox1.Focus();
}
private void richTextBox1_MouseMove(object sender, MouseEventArgs e)
{
if (dragging)
{
Point pointMoveTo;
pointMoveTo = this.PointToScreen(new Point(e.X, e.Y));
pointMoveTo.Offset(-pointClicked.X, -pointClicked.Y);
this.Location = pointMoveTo;
}
}
// I've used Focus() method to prevent entering the richTextbox using certain ClickEvents.
This should be it: you should have automatically scrolling and looping text that you load from a .nfo text file (also in resource), convert, and view in the richTextBox, with optional color-changing code. You can move the Form around by dragging this control as well.
Refer to 4) The PictureBox [Clickable], but don't add any (click) Event and set the Property Enabled to False. That will do it: without any code, you can drag around any disabled objects.
7.1) BUTTONS: Add some buttons you want; change the Text to P, - and + for the functions. For each Button, add the Event Click. Within these methods we put the actions to create this new functionality. I created these: buttonp (PAUSE), buttonmin (SLOWER) and buttonplus (FASTER). Note the t.Interval, which we've set in 'Set the speed and enable Timer to send messages' from 5) The RICHTEXTBOX. The setting that I use as default speed is t.Interval = 40;
t.Interval = 40;
// The Interval needs to be smaller to send its messages faster (faster speed).
private void buttonplus_Click(object sender, EventArgs e)
{
if (t.Interval > 10) // Not highest Speed.
{
t.Interval = t.Interval - 15; // Increase speed by lowering the interval 15ms.
if (t.Interval <= 10) // When this reaches highest speed after calculation...
{ buttonplus.Enabled = false; } // Disable the speedbutton.
buttonmin.Enabled = true; // Enable the opposite speedbutton.
}
}
private void buttonmin_Click(object sender, EventArgs e)
{
if (t.Interval < 70)
{
t.Interval = t.Interval + 15;
if (t.Interval >= 70)
{ buttonmin.Enabled = false; }
buttonplus.Enabled = true;
}
}
private void buttonp_Click(object sender, EventArgs e)
{
if (t.Enabled == true) // First check if timer is running.
{
t.Enabled = false; // If so, disable it.
}
else // (t.Enabled == false)
{
t.Enabled = true; // Otherwise do the opposite.
}
}
// If you can count, you'll see you can press both the + & - buttons 2x if you have the default speed [40].
7.2) KEYPRESS Event:
Determine where you want to have the ability to press your keyboard key. I decided my Form was the best choice, but the focus just wasn't there. I've used the following steps:
- KeyPreview (enabled on the Form);
- TabIndex (changed for the controls);
- this.Focus();
Then, time for the KeyPress Events. Go to the Form Properties and select the Event options. Add KeyPress. For this one method, we can add all the keyboard keys we want.
If you want the old code with the button option, check the v1.1 source release. In v1.4 I used the following situation:
// KEYBOARD KEYS THAT PROVIDE FORM-FUNCTIONS FOR USERS.
// Note you can't use 'command' keys, just input keys that actually print to screen.
private void Form1_KeyPress(object sender, KeyPressEventArgs e)
{
if (e.KeyChar == 'm') // Toggle Music.
{
if (checkBox2.CheckState == CheckState.Checked)
{ checkBox2.CheckState = CheckState.Unchecked; }
else // Unchecked state.
{ checkBox2.CheckState = CheckState.Checked; }
}
if (e.KeyChar == 'p') // Pause Scroll.
{
if (ScrollEnabled == "true")
{ ScrollEnabled = "false";}
else { ScrollEnabled = "true"; }
}
if (e.KeyChar == '=' || e.KeyChar == '+') // Speed up Scroll, Values are fixed!
{ if (t.Interval == 40)
{ t.Interval = t.Interval - 30; }
else if (t.Interval == 80)
{ t.Interval = t.Interval - 40; }
}
if (e.KeyChar == '-') // Slow down Scroll.
{ if (t.Interval == 40) { t.Interval = t.Interval + 40; }
else if (t.Interval == 10) { t.Interval = t.Interval + 30; } }
}
Refer to the Form Properties of 1) The FORM [MAINFRAME]. In this section you can add an Icon for the application IN RUNNING STATE.
The .exe file itself still has its default window Icon. To change this to the same/another Icon, open the Solution Explorer. Right-click the name of your project (almost at the top, in bold) and open the Properties. If the tab isn't set on Application, select that tab. You'll find the option 'Icon and manifest', where you can browse for the same .ico file. When you save this, your .exe file will also contain your selected icon.
A new Form that opens when you click on the corresponding Question Mark PictureBox. When you hover over the box, the Question Mark will glow. I added those 2 pictures as resource. The next block of code shows how that Question Mark works:
// QUESTION-MARK BUTTON ['About' POPUP]
void buttonQ_MouseEnter(object sender, EventArgs e)
{ this.buttonQ.BackgroundImage = ((System.Drawing.Image)(Properties.Resources.QMarkNeonGlow)); }
void buttonQ_MouseLeave(object sender, EventArgs e)
{ this.buttonQ.BackgroundImage = ((System.Drawing.Image)(Properties.Resources.QMarkNeonLessGlow)); }
private void buttonQ_Click(object sender, EventArgs e)
{
f2.TopMost = false; // DISABLE SHIELD ON TOP
Form f3 = new Form3(); // CREATE 'About' POPUP
f3.ShowDialog(); // BLOCK INTERACTION WITH APPLICATION until popup closed.
}
By using the MouseEnter and MouseLeave Events you can change the appearance of the picture when the mouse hovers over the PictureBox. I have to disable the shield's TopMost, otherwise it will prevent users from closing the popup. When users click the richTextBox, I make sure that the TopMost of the shield (f2) is activated. It's not the best solution, but I'm satisfied with it for now. To see the complete setup, the source is your friend!
I wanted to have small music files (at least compared to mp3), so I've chosen the mod format [.it/.xm/.s3m/.mod]. Those can be big too, but I just download tracks < 1MB to keep my application small. There are no simple ways to code playing music, only when you want to run .mp3/.wav using WMP. I first chose the irrKlang modplayer, and later BASS, to play my mod music files. [For more info, click the modplayer link] It was hard to find out how this would work all by myself, but I succeeded. Steps I've done for irrKlang are stated below as reference:
In the extracted irrKlang folder:
In your Visual Studio Project, do the following: In the Solution Explorer, rightclick 'References' and select Add Reference... Select the Browse tab, search and select your .NET dll (irrKlang.NET4.dll in my case) and add it.
Add this line to the top of your project:
using IrrKlang;
Then we can go to the actual code that loads and plays our mod music file. Refer to the first point (view a .nfo file correctly) of 5) The RICHTEXTBOX. We've used the Form1_Load( ) method there to read and convert the .nfo file and display it when the Form loads. The same thing we'll do with our music when the application starts. Paste the following line just before the Form1_Load( ) method:
IrrKlang.ISoundEngine engine = new IrrKlang.ISoundEngine();
The reason I use it before the Form loads, is because I need this variable in more methods (also to pause it)! In the Form1_Load( ) method, we add the following line:
engine.Play2D("C:\\Test\\Xaser-Aeolus.it", true);
This means we play a file on the computer, so as long as I can't include it in the project, you have to ship it with the application and make sure it copies/extracts to the correct folder. Otherwise the application runs, but without music and possibly without the NFO file from the same folder.

Steps I've done for BASS: I'm able to play my music from resource now, so this is easier and always works. Download the BASS and Bass.Net dlls and add the files to the project, or copy them from my source. In Solution Explorer, go to References and add Bass.Net.dll as a reference. Unless you add them to resource, you have to put them with the final .exe as well. When the dlls are added and the reference is made, add the following line to the top:
using Un4seen.Bass;
I have a directory 'Resources' in my project and right-clicked -> added my mod music file. Set its properties: Build Action: Embedded Resource. Using the following code in Form1_Load( ) I can play that track from resource using BASS on startup:
// LOAD MUSIC
BassNet.Registration("xxxxxxxx@xxxxxxxx.xx", "XXXXXXXXXXXXXXX"); // Source activated.
Bass.BASS_Init(-1, 44100, BASSInit.BASS_DEVICE_DEFAULT, IntPtr.Zero);
// "Projectname.(OptionalFolder.)ResourceFileName.Extension"
using (var manifestResourceStream = Assembly.GetExecutingAssembly().GetManifestResourceStream("WindowsFormsApplication1.Resources.Saga Musix - Heaven.it"))
// Use string[] names = System.Reflection.Assembly.GetExecutingAssembly().GetManifestResourceNames(); in debug to test the proper path to the resource.
{
if (manifestResourceStream != null)
{
int length = (int)manifestResourceStream.Length;
byte[] buffer = new byte[length];
manifestResourceStream.Read(buffer, 0, length);
int load = Bass.BASS_MusicLoad(buffer, 0, length, BASSFlag.BASS_SAMPLE_LOOP, 0);
Bass.BASS_SetConfig(BASSConfig.BASS_CONFIG_GVOL_MUSIC, vol);
Bass.BASS_ChannelPlay(load, false);
}
}
Refer to 3) The CHECKBOX [Pause Music] to see how to pause the running mod music and continue it.
Here I list the things I've changed to let the application appear nicely in other places on the computer.
Default .exe Filename: We go to the Solution Explorer, right-click our project and then Properties. Select the 'Application' tab and change the Assembly Name to whatever you want for the exe. As I'm in BETA, it doesn't really matter; I change it to a character name to see a difference ('Angel'). If you set this, you don't have to change the name of the exe manually after you've debugged (exported) the .exe with the default 'TestApplication.exe' name.
Process Description: I noticed when running my program that the description in Task Manager is WindowsFormsApplication1. That's not what I want; I want something like (Argaï) NFO Viewer! To change the Process Description, go to the Solution Explorer; under YourProjectName you see the directory 'Properties'. In that directory, right-click AssemblyInfo.cs and click 'View Code'. Find [assembly: AssemblyTitle("WindowsFormsApplication1")], change the Title to what you want, and it will show that title as the Description when your Process is running!
Taskbar and Taskmanager appearance: As informed before, changing the Property Text of the Form will create the 'program name' which appears when you hover over the running icon on the taskbar. The Assembly Name that is changed will present the .exe name that will be shown as process in Taskmanager.
In just a few weeks I created this. The basic stuff was easy to do, but making my richTextBox autoscroll the way I wanted it (looped) was hard, as was changing the colors with it. Also, disabling all interaction with it without disabling the richTextBox itself was kind of hard to figure out. I worked using trial and error and still do. Using some code samples from people online to achieve this was really hard, as I always got errors about 'missing references' etc... I'm happy I got that all working now!
Changes after first upload:
- Assembly Name changed;
- Process description changed [See 11) Additional Settings];
- TabIndex of controls changed;
- Focus added to Form (for general KeyPress option);
- Added 3 buttons for autoscroll speed options [See 7) Buttons/KeyPress];
- Added 3 KeyPress options to do the same as the buttons [See 7) Buttons/KeyPress]; I prefer the KeyPress, but had to test with buttons first and fix the focus etc...
- Added the .NFO text file as a resource to the project so people don't need to extract it to specific location anymore. I still try to add the music and dll file as a resource as well. (See the 'use a .nfo file as a resource' part of 5) The RICHTEXTBOX);
- Pressing arrow keys won't allow you to enter richTextBox anymore. (See 1) The FORM);
- Made Icon rounder (used circle shape instead of manually removing all around the circle);
- Better documented and ordered the Source Code for own reference;
- Changed ModPlayer 'irrKlang' to 'BASS', which supports all tracks that first didn't work, and it uses lighter DLLs which make my application easier to run on more systems (See 'Steps I've done for BASS' of 10) Music (mod format));
- Removed the close button and added more KeyPresses (Escape to close application) (See move around of 1) The FORM);
- Added a fade effect when closing the application; this also fades the music with it to create a smooth closing (See Fade Effect of music and app of 1) The FORM);
- The Music is now Resource! No need to put it all on C:\Test anymore; it runs and works everywhere (See 'Steps I've done for BASS' of 10) Music (mod format));
- Changed the default start location; now the application always starts in the center of the screen (See Properties of 1) The FORM);
- Added a question mark button which opens a simple 'About Window'. This displays the options of the application. While this window is active, you can't interact with the application itself. If you close the application, the question mark button will be disabled while fading (See 9) 'About' Window);
- The richTextBox now has changing color functionality. It remains the same speed, even if you change scroll speed or pause the scrolling. I added a rough pattern of strong colors (See Create the 2 methods t_Tick( ) and scroll( ) of 5) The RICHTEXTBOX);
- Finally I added both DLLs as resource. One can be read internally, the other one has to be written to disk. Now you can run my application anywhere without copying the DLLs with it (Refer to my source and the link about DLLs in the References at the bottom of this Article);
- Added a new Form as 'Shield' which prevents interaction with the richTextBox. Now the click bug of v1.1/1.2 is solved. It does however take the focus of the application away, so KeyPresses won't work until you click the application again. The shield also remains on top of any other application you run, even if my app is in the background! If there's another solution I'll look into that;
- Flickering application when dragging outside of the screen has been solved! (See Properties of 1) The FORM);
- 'Line Jumping Bug' fixed; text now remains in the same position instead of jumping to the last passed line while updating colors when users pause the scroller;
- Better documented and ordered the Source Code for own reference;
- Changed Project Name, so all my different designs also have different names!
Bugs
- Windows 8 (always?) contains empty background of checkBox
Link to original code for custom Form without any Border and default Buttons:
Link to original autoscroll code which I used and changed:
Link to my Question 'how to use .nfo as resource':
Link to the 'add DLL as resource' article which I implemented partly:
Hi, I just don't get why this isn't working.
It is supposed to read from the end of a .txt file upwards, until it hits the first space, but it just keeps repeating the last character in the text file ip.txt.
Any help is much appreciated.
#include <iostream>
#include <fstream>
using namespace std;

int main()
{
    string ip = "a";
    char chr;
    ifstream iptilvar;
    iptilvar.open("ip.txt"); // open filestream
    for (int i = -1; ip != " "; i--) // keep reading until a space is found
    {
        iptilvar.seekg(i, ios::end); // set the get point to eof - i
        getline(iptilvar, ip);
        chr = ip[0]; // get the first character of the string called ip
        ip = chr;
        cout << ip;
    }
    iptilvar.close();
    return 0;
}
Edited by Bladtman242: n/a | https://www.daniweb.com/programming/software-development/threads/242236/problem-with-loop-and-ifstream-seekg | CC-MAIN-2018-09 | refinedweb | 124 | 75.91 |
Databinding and the Avalon UI
Graphics, Animation, and Splines
Graphics objects, those from the MSAvalon.Windows.Shapes namespace, can be manipulated through a number of different types of animations. Some of these were discussed in the last article, "XAML Custom Controls and Animation." However, the animation itself can be changed in the way it is executed.
The InterpolationMethod property of animation classes, such as the LengthAnimation in the provided sample, allows the Longhorn developer to change the way in which the animation will run. Possible choices for InterpolationMethod are Discrete, Linear, Paced, and Spline. Of these, Spline holds much potential for a majority of applications relying on animation.
The sample code provided binds a splined interpolation to a LengthAnimation by using LengthKeyFrames. These key frames mark the point on the spline where the animation should be when the timeline is on the key frame. For example, if the code says that the Length property is animated from 0 to 50 with a keyframe at 25, then frames 0-24 and 26-50 can be interpolated automatically by the framework.
DataBinding and Manipulation
I was amazed when I came across the Transformer property of the Bind tag in XAML. This simple tag can allow you to do complex data manipulation post-binding. For example, if your data is bound to an XMLDataSource (such as in the provided example), you have the option of changing that data before it reaches the actual User Interface.
Because the OS and API are in very early stages, some of this may change, such as the location of the data source, and where a binding context is set, but for now, this provides an accurate (and working) sample of the things to come. First, for our canvas object, we are going to bind it to data by doing three things: putting a data source reference in the canvas' resources, adding a transformer, and adding a control to bind the transformed data to.
<Canvas ID="MainPanel">
    <Canvas.Resources>
        <XmlDataSource def:Name="SongData">
            <Notes xmlns="">
                <Note><Type>quarter</Type></Note>
            </Notes>
        </XmlDataSource>
    </Canvas.Resources>
</Canvas>
Here is a simple way of adding our data right into the user interface, mostly for static data. In practice, the data would most likely be inserted programmatically, and just referenced with the XAML tag. The above example illustrates the use of XPath navigation tags to bind to specific locations within XML. It says that the data binding is to the root of the Notes node off of the absolute root of the XML document.
<Canvas.DataContext>
    <Bind DataSource="{SongData}" Path="Note" BindType="OneWay" />
</Canvas.DataContext>
This tag, added to the MainPanel canvas, sets the context for all databound controls within that canvas. It further specifies a deeper location within the DataSource's root (defined by the XPath property). The DataSource property on the binding is in the same format as references to styles, or any other sort of object defined within the XAML document. By surrounding the ID, or in this case the def:Name with curly-braces, the XAML parser knows to look in the resources for the appropriate object.
<TransformerSource def:Name="NoteTransformer" TypeName="..." />
A TransformerSource also needs to be added to the MainPanel canvas' resources if the data is not to be displayed verbatim from the DataSource. In our example, we are taking a code for a note type, such as "quarter," and displaying it as a proper string "Quarter Note." The TypeName refers to the class that implements an IDataTransformer interface, as will be discussed later.
<SimpleText>
    <SimpleText.Text>
        <Bind Path="Type" Transformer="{NoteTransformer}" />
    </SimpleText.Text>
</SimpleText>
This binds the data from the DataContext through the NoteTransformer class into the text property of the SimpleText control. This can be useful for any number of things, from data cleaning (imagine online message boards running their comment data source through a "clean language" filter) to formatting (different colors of an item based on prices for an online retailer's smart client application), to language translation for accessibility.
Inside the codebehind, one needs to declare the NoteTransformer class for the example to work properly. The class must implement the IDataTransformer interface, with methods for InverseTransform and Transform. Transform takes the data from the source and transforms it for the target control. InverseTransform does the opposite; it transforms data from the target to the source.
The truly important things passed to the transform class are the object, which is the actual bound information from the data source (in this case, this is a string); and the DependencyProperty. The DependencyProperty provides a way for a single transformer to act differently based on which control property the data was bound to. For example, a price could be bound to both the color and the text properties of a SimpleText control. When bound to the color, the transformer could change a low price to green, and a high price to red. At the same time, the same transformer could take a price and render it to the client's localized currency format and display that as the text in the control.
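To make this concrete, here is a rough sketch of what such a transformer class might look like. This is illustrative only: it targets the long-gone pre-release PDC-era Avalon API, and the exact method signatures (including whether a CultureInfo parameter is passed, and the PropertyInfo parameter on the inverse) are assumptions reconstructed from the description above, not verified against the SDK.

```csharp
public class NoteTransformer : IDataTransformer
{
    // Source -> target: turn a note code such as "quarter" into display text.
    // Signature is an assumption based on the article's description.
    public object Transform(object value, DependencyProperty dp, CultureInfo culture)
    {
        string code = (string)value;
        // "quarter" -> "Quarter Note"
        return char.ToUpper(code[0]) + code.Substring(1) + " Note";
    }

    // Target -> source: strip the display formatting back off.
    public object InverseTransform(object value, PropertyInfo info, CultureInfo culture)
    {
        string text = (string)value;
        return text.Replace(" Note", "").ToLower();
    }
}
```

A transformer like this could also branch on the DependencyProperty argument, returning, say, a Color for a foreground binding and a string for a text binding from the same source value.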
There are very interesting things that can be done through both data binding and transformation, including localization of data based on simple XAML.
A Look Ahead
Next month, we'll get a closer look at the Longhorn Speech API, and see how you can develop a simple speech-to-text application.
One Last Note
All of the provided code compiles under PDC builds of Longhorn (4051), Whidbey (m2.030828-1205), .NET Framework (1.2.30703), and the Longhorn SDK. The sample applications provided, DTransform, and AnimPath, show what can be done by implementing the techniques mentioned in the article.
| http://www.developer.com/net/net/article.php/3333071/Databinding-and-the-Avalon-UI.htm | CC-MAIN-2015-48 | refinedweb | 939 | 51.89 |
DateTime::Event::Predict - Predict new dates from a set of dates
Given a set of dates this module will predict the next date or dates to follow.
use DateTime::Event::Predict;

my $dtp = DateTime::Event::Predict->new(
    profile => {
        buckets => ['day_of_week'],
    },
);

# Add today's date: 2009-12-17
my $date = new DateTime->today();
$dtp->add_date($date);

# Add the previous 14 days
for (1 .. 14) {
    my $new_date = $date->clone->add(
        days => ($_ * -1),
    );
    $dtp->add_date($new_date);
}

# Predict the next date
my $predicted_date = $dtp->predict;

print $predicted_date->ymd; # 2009-12-18
Here we create a new DateTime object with today's date (it being December 17th, 2009 currently). We then use add_date to add it onto the list of dates that DateTime::Event::Predict (DTP) will use to make the prediction.

Then we take the 14 previous days (December 3rd through the 16th) and add them onto the same list one by one. This gives us a good set to make a prediction from.

Finally we call predict, which returns a DateTime object representing the date that DTP has calculated will come next.
Predicting the future is not easy, as anyone except, perhaps, Nostradamus will tell you. Events can occur with perplexing randomness and discerning any pattern in the noise is nigh unpossible.
However, if you have a set of data to work with that you know for certain contains some sort of regularity, and you have enough information to discover that regularity, then making predictions from that set can be possible. The main issue with our example above is the tuning we did with this sort of information.
When you configure your instance of DTP, you will have to tell it what sorts of date-parts to keep track of so that it has a good way of making a prediction. Date-parts can be things like "day of the week", "day of the year", "is a weekend day", "week of month", "month of year", differences between dates counted by "week" or "month", etc. DTP will collect these identifiers from all the provided dates into "buckets" for processing later on.
Constructor
my $dtp = DateTime::Event::Predict->new();
Arguments: none | \@dates
Return value: \@dates
Called with no argument this method will return an arrayref to the list of the dates currently in the instance.
Called with an arrayref to a list of DateTime objects (\@dates), this method will set the dates for this instance to \@dates.
Arguments: $date
Return value:
Adds a date onto the list of dates in the instance, where $date is a DateTime object.
Arguments: $profile
Set the profile determining which date-parts will be bucketed.
# Pass in preset profile by its alias
$dtp->profile( profile => 'default' );
$dtp->profile( profile => 'holiday' );

# Create a new profile
my $profile = new DateTime::Event::Predict::Profile(
    buckets => [qw/ minute hour day_of_week day_of_month /],
);

$dtp->profile( profile => $profile );
The following profiles are provided for use by-name:
Arguments: %options
Return Value: $next_date | @next_dates
Predict the next date(s) from the dates supplied.
my $predicted_date = $dtp->predict();
In list context, predict returns a list of all the predictions, sorted by their probability:
my @predicted_dates = $dtp->predict();
The number of predictions can be limited with the max_predictions option.
Possible options
$dtp->predict(
    max_predictions => 4, # Once 4 predictions are found, return back
    callbacks => [
        sub { return ($_->second % 4) ? 0 : 1 } # Only predict dates with second values that are divisible by four.
    ],
);
max_predictions

Maximum number of predictions to find.

callbacks

Arrayref of subroutine callbacks. If any of them return a false value the date will not be returned as a prediction.
Train this instance of DTP
Brian Hann,
<brian.hann at gmail.com>
Please report any bugs or feature requests to
bug-datetime-event-predict::Predict
You can also look for information at:
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
DateTime, DateTime::Event::Predict::Profile | http://search.cpan.org/~bhann/DateTime-Event-Predict-0.01_04/lib/DateTime/Event/Predict.pm | CC-MAIN-2016-30 | refinedweb | 643 | 57.71 |
Brute Force
Giorgos Mourtasagas
Greenhorn
Joined: Nov 04, 2009
Posts: 18
posted
Mar 13, 2010 15:29:30
Hi"); } } } } }
But as you see, that is only 4 characters and I want 8.
I believe that there is another way to do it without writing 8 for loops.
If anyone has an idea, please help me.
Nathan Leniz
Ranch Hand
Joined: Nov 26, 2006
Posts: 132
posted
Mar 14, 2010 03:38:58
I would ask why you'd want to brute force something like this, because choosing 4 out of a set of 36 produces only 82,251 combinations while choosing 8 produces a much more respectable 145,008,513. All that is assuming you'll be matching a string in which the order doesn't matter. I see in your example you are using it as a "password" which would mean that given the same set of 36 characters and trying to brute force a password (which should, but not always, result in the order mattering) consisting of 8 characters, it could take you set^subset attempts, in this case a staggering 2,821,109,907,456 total permutations.
My math is a bit rusty and the numbers might be off, but Brute Forcing something like that is astronomical. If it's still something you want to give a shot at, the Google bot returned this link for me.
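For the curious, those counts can be rechecked directly with exact integer arithmetic. The unordered figures are combinations with repetition, C(n+k-1, k), and the ordered figure is 36^8; this throwaway class (names are mine) is just a back-of-the-envelope verification:

```java
import java.math.BigInteger;

public class BruteForceCounts {
    // n choose k with exact arithmetic; each intermediate divide is exact
    // because the running value is always a binomial coefficient.
    static BigInteger choose(int n, int k) {
        BigInteger result = BigInteger.ONE;
        for (int i = 0; i < k; i++) {
            result = result.multiply(BigInteger.valueOf(n - i))
                           .divide(BigInteger.valueOf(i + 1));
        }
        return result;
    }

    // Unordered selections with repetition: C(n + k - 1, k)
    static BigInteger multichoose(int n, int k) {
        return choose(n + k - 1, k);
    }

    public static void main(String[] args) {
        System.out.println(multichoose(36, 4));            // 82251
        System.out.println(multichoose(36, 8));            // 145008513
        System.out.println(BigInteger.valueOf(36).pow(8)); // 2821109907456
    }
}
```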
The very existence of flamethrowers proves that at some time, some where, some place, someone once said to themselves "I'd really like to set those people on fire over there, but I just can't get close enough".
Giorgos Mourtasagas
Greenhorn
Joined: Nov 04, 2009
Posts: 18
posted
Mar 14, 2010 08:40:42
I want to make it because it's my homework.
I have a zip file with AES encryption and I want to find out the password with brute force and dictionary attack.
I have made it with dictionary attack but with brute force I have a problem.
I understand that it is useless, but I must do it.
Giorgos Mourtasagas
Greenhorn
Joined: Nov 04, 2009
Posts: 18
posted
Mar 19, 2010 07:03:08
I wrote this program but I have a little problem.
I don't know how to convert the elements of an integer array to a string to make the comparison and find the password. I found examples, but they aren't with arrays; they only convert a single integer to a string.
Here is the code.
import java.io.*;
import com.javamex.arcmexer.*;
import java.*;

class Brutus4 {
    public static void main(String args[]) {
        char charSet[] = {'0','1','2','3','4','5','6','7','8','9',
                          'a','b','c','d','e','f','g','h','i','j',
                          'k','l','m','n','o','p','q','r','s','t',
                          'u','v','w','x','y','z'};
        try {
            FileInputStream f = new FileInputStream("C://ReadZip.zip");
            ArchiveReader r = ArchiveReader.getReader(f);
            ArchiveEntry entry = r.nextEntry();
            int index[] = new int[8];
            for (int c0 = 0; c0 < charSet.length; c0++) {
                index[0] = c0;
                for (int c1 = 0; c1 < charSet.length; c1++) {
                    index[1] = c1;
                    for (int c2 = 0; c2 < charSet.length; c2++) {
                        index[2] = c2;
                        for (int c3 = 0; c3 < charSet.length; c3++) {
                            index[3] = c3;
                            for (int c4 = 0; c4 < charSet.length; c4++) {
                                index[4] = c4;
                                for (int c5 = 0; c5 < charSet.length; c5++) {
                                    index[5] = c5;
                                    for (int c6 = 0; c6 < charSet.length; c6++) {
                                        index[6] = c6;
                                        for (int c7 = 0; c7 < charSet.length; c7++) {
                                            index[7] = c7;
                                            String s = Integer(index).toString(); /* here is the problem */
                                            System.out.println(s);
                                            if (entry.isProbablyCorrectPassword(s)) {
                                                System.out.println("the paswd found");
                                            } else {
                                                System.out.println("the pswd isn't found");
                                            }
                                        }
                                    }
                                }
                            }
                        }
                    }
                }
            }
        } catch (Exception e) {
            System.out.println("Exception raised!");
            e.printStackTrace();
        }
    }
}
Wouter Oet
Saloon Keeper
Joined: Oct 25, 2008
Posts: 2700
posted
Mar 19, 2010 07:25:04
The key word is permutations. Here is an example.
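The idea in a nutshell: instead of nesting one loop per character position, recurse once per position. A small sketch (class and method names are mine; swap in the 36-character set and length 8 for the real search, though at that size it will run for a very long time):

```java
import java.util.ArrayList;
import java.util.List;

public class Permutations {
    // Build every string of the given length from the character set,
    // using recursion instead of one hand-written loop per position.
    static void generate(char[] charSet, String prefix, int length, List<String> out) {
        if (length == 0) {
            out.add(prefix); // one complete candidate string
            return;
        }
        for (char c : charSet) {
            generate(charSet, prefix + c, length - 1, out);
        }
    }

    public static void main(String[] args) {
        char[] charSet = {'a', 'b'};
        List<String> out = new ArrayList<>();
        generate(charSet, "", 3, out);
        System.out.println(out); // [aaa, aab, aba, abb, baa, bab, bba, bbb]
    }
}
```

For a password search you would test each candidate as it is produced (and stop on success) rather than collecting them all in a list first.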
"Any fool can write code that a computer can understand. Good programmers write code that humans can understand." --- Martin Fowler
Please correct my English.
I agree. Here's the link:
This chapter introduces the pandas library (or package). pandas is a package built using NumPy (pronounced 'num-pie').

Up until this point, the examples seen in this notebook utilize Python's built-in types and functions. NumPy is a package built to support scientific computing in Python, and it provides the ndarray object for array arithmetic. We will illustrate a few useful NumPy objects as a way of illustrating pandas.
pandas was developed to support data analysis with more flexibility than is offered by the ndarray object in NumPy. For data analysis tasks we often need to group dissimilar data types together. Examples are categorical data using strings, frequencies and counts using ints, and floats for continuous values. In addition, we would like to be able to attach labels to columns, pivot data, and so on.
We begin by introducing the Series object as a component of the DataFrame object, comparing pandas components to those found in SAS along the way.
pandas has three main data structures:
1. Series
2. DataFrame
3. Index
Indexes are covered in detail in Chapter 6, Understanding Indexes.
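Before looking at each structure in turn, here is a minimal sketch showing all three (the data is arbitrary):

```python
import pandas as pd

# Series: a one-dimensional labeled array
s = pd.Series([1, 2, 3], index=['a', 'b', 'c'])

# DataFrame: a two-dimensional labeled table; each column is itself a Series
df = pd.DataFrame({'x': [1, 2, 3], 'y': [4.0, 5.0, 6.0]})

# Index: the axis labels are first-class objects in their own right
print(df.columns)   # Index(['x', 'y'], dtype='object')
print(s.index)      # Index(['a', 'b', 'c'], dtype='object')
```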
import numpy as np import pandas as pd from numpy.random import randn from pandas import Series, DataFrame, Index from IPython.display import Image
A Series can be thought of as a one-dimensional array with labels. This structure includes an index of labels used as keys to locate values. Data in a Series can be any data type. pandas data types are covered in detail here. In the SAS examples, we use Data Step ARRAYs as an analog to the Series.
Start by creating a Series of random values.
s1 = Series(randn(10)) print(s1.head(5))
0    1.470961
1    0.724744
2   -1.601498
3    0.201619
4   -1.106859
dtype: float64
/******************************************************/ /* c04_array_random_values.sas */ /******************************************************/ 4 data _null_; 5 6 call streaminit(54321); 7 8 array s2 {10} ; 9 do i = 1 to 10; 10 s2{i} = rand("Uniform"); 11 12 if i <= 5 then put 13 s2{i}; 14 end; 0.4322317772 0.5977982976 0.7785986473 0.1748250183 0.3941470125
s2 = Series(randn(10), index=['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']) print(s2.head(5))
a 1.057676 b 0.154904 c -1.358674 d -1.661059 e 2.111150 dtype: float64
The Series is indexed by integer value with the start position at 0.
print(s2[0])
1.05767568654
The SAS example uses a DO loop as the index subscript into the array.
/******************************************************/ /* c04_return_1st_array_element.sas */ /******************************************************/ 4 data _null_; 5 6 call streaminit(54321); 7 8 array s2 {10} ; 9 do i = 1 to 10; 10 s2{i} = rand("Uniform"); 11 12 if i = 1 then put 13 s2{i}; 14 end; 0.4322317772
Return the first 3 elements in the Series.
print(s2[:3])
a 1.057676 b 0.154904 c -1.358674 dtype: float64
/******************************************************/ /* c04_return_first_3_array_elements.sas */ /******************************************************/ 20 data _null_; 21 22 call streaminit(54321); 23 24 array s2 {10} ; 25 do i = 1 to 10; 26 s2{i} = rand("Uniform"); 27 28 if i <= 3 then put 29 s2{i}; 30 end; 0.4322317772 0.5977982976 0.7785986473
The example has two operations. The s2.mean() method calculates mean followed by a boolen test less than this calculated mean.
s2[s2 < s2.mean()]
c -1.358674 d -1.661059 g -0.426155 h -0.841126 dtype: float64
Series and other objects have attributes using a dot (.) chaining-style syntax. .name is one a number of attributes for the Series object.
s2.name='Arbitrary Name' print(s2.head(5))
a 1.057676 b 0.154904 c -1.358674 d -1.661059 e 2.111150 Name: Arbitrary Name, dtype: float64 -- Panda Readers
Start by reading the UK_Accidents .csv file. It contains vehicular accident data in the U.K from January 1, 2015 to December 31, 2015. The .csv file is located here.
There are multiple reports for each day of the year. The values are mostly integer values using the Road-Accident_Safety-Data_Guide.xls file found here to map values to descriptive labels.
The default values are used in the example below. Pandas provide a number of readers having parameters for controling missing values, date parsing, line skipping, data type mapping, etc. These parameters are analogous to SAS' INFILE/INPUT processing.
Additional examples of reading various data inputs into a DataFrame are covered in Chapter 11 -- Panda Readers
Notice the.
/******************************************************/ /* c04_read_csv_proc_import.sas */ /******************************************************/ 5 proc import datafile='c:\data\uk_accidents.csv' out=uk_accidents; report respectively, number of cells, rows/columns, and number of dimensions are shown below.
print(df.size, df.shape, df.ndim)
7202952 (266776, 27) 2
df.info()
<class 'pandas.core.frame.DataFrame'>
Image(filename='Anaconda3\\output\\contents1.JPG')
Image(filename='Anaconda3\\output\\contents2.JPG')
panda have methods which used to inspect to:
proc print data=uk_accidents (firstobs = 266756);
df.head()
5 rows × 27 columns
OBS=n in SAS determines the number of observations used as input.
/******************************************************/ /* c_04_display_1st_5_obs.sas */ /******************************************************/ 39 proc print data = uk_accidents (obs=5); The output from PROC PRINT is not displayed here.
This example uses the slicing operator to request columns by labels. Slicers work along rows as 'sex_of_driver' and 'time'.
/******************************************************/ /* c04_scoping_obs_and_variables.sas */ /******************************************************/ 40 proc print data = uk_accidents (obs=10); 41 var sex_of_driver time; The output from PROC PRINT is not displayed here.
Before analyzing data a common task is dealing with missing data. pandas uses two designations to indicate missing data, NaN (not a number) and the Python None object.
Consider cells #15, #16, and #17 below. Cell #15 uses the Python None object to represent a missing value in the array. In turn, Python infers the data type for the array to be an object. Unfortuantely, the use of a Python None object with an aggregation function for arrays raises an error. Cell #17 addresses the error raised in cell #16.
s1 = np.array([32, None, 17, 109, 201]) s1
array([32, None, 17, 109, 201], dtype=object)
s1.sum()
--------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-16-b615dd188243> in <module>() ----> aliviate
Contrast the Python program in the cell above for calculating the mean of the array elements with the SAS example below. SAS excludes the missing value and utilizes the remaining array elements to calculate a mean.
/******************************************************/ /* c04_mean_of_array_elements.sas */ /******************************************************/ 4 data _null_; 5 6 array s2 {5} (32 . 17 109 201); 7 avg = mean(of s2[*]); 8 9 put avg; 89.75 cell #19 separate output is produced for each variable. As with the example in cell # 19 above, the 'time' variable is the only variable with missing values.
/******************************************************/ /* c04_find_missing_numerics_characters.sas */ /******************************************************/ 26 proc format; 27 value $missfmt ' '='Missing' other='Not Missing'; 28 value missfmt . ='Missing' other='Not Missing'; 29 run; 30 31 proc freq data=uk_accidents; 32 format _CHARACTER_ $missfmt.; 33 tables _CHARACTER_ / missing missprint nocum nopercent; 34 35 format _NUMERIC_ missfmt.; 36 tables _NUMERIC_ / missing missprint nocum nopercent;
Image(filename='Anaconda3\\output\\freq.JPG')
Another method for detecting missing values is to search column-wise by using the axis=1 parameter to the chained attributes .isnull().any(). The operation is then performed along columns.
null_data = df[df.isnull().any(axis=1)] null_data.head()
5 rows × 27 columns) cell #28 above, the .fillna() method is applied to all DataFrame cells. We may not wish to have missing values in df['col2'] replaced with zeros since they are strings. The method is applied to a list of target columns using the .loc method. The details occurences" replacing missing values with &col6_mean.
A more detailed example of replacing missing values with group means is located here.
SAS/Stat has PROC MI for imputation of missing values with a range of methods described here. PROC MI is outside the scope of these examples.
/******************************************************/ /* c04_replace_missing_with_mean_values.sas */ /******************************************************/ 4 data df; 5 infile cards dlm=','; 6 7 input col1 $ 8 col2 $ 9 col3 10 col4 11 col5 12 col6 ; 13 14 datalines; 15 cold, slow, ., 2, 6, 3 16 warm, medium, 4, 5, 7, 9 17 hot, fast, 9, 4, ., 6 18 cool, , ., ., 17, 89 19 cool, medium, 16, 44, 21, 13 20 cold, slow, . ,29, 33, 17 21 ;;;; 22 proc sql; 23 select mean(col6) into :col6_mean 24 from df; 25 quit; 26 27 data df2; 28 set df; 29 array x {3} col3-col5 ; 30 31 do i = 1 to 3; 32 if x(i) = . then x(i) = &col6_mean; 33 end;
The .fillna(method='ffill') is a 'forward' fill method. NaN's are replaced by the adjacent cell above traversing 'down' the columns. Cell #32 below constrasts the DataFrame df2, created in cell #24 above with the DataFrame df9 created with the 'forward' fill method.
df9 = df2.fillna(method='ffill') display("df2", "df9")
df2
df9
Simalarly, the .fillna(bfill) is a 'backwards' fill method. NaN's are replaced by the adjecent cell traversing 'up' the columns. Cell #32 constrasts the DataFrame df2, created in cell #23 above with the DataFrame df10 created with the 'backward' fill method.
df10 = df2.fillna(method='bfill') display("df2", "df10")
df2
df10
Cell #34 contrasts DataFrame df9 created in cell #32 using the 'forward' fill method with DataFrame df10 created in cell #33 with the 'backward' fill method.)
10 Minutes to pandas from pandas.pydata.org.
Tutorials , and just below this link is the link for the pandas Cookbook, from the pandas 0.19.0.0 documentation. | https://nbviewer.jupyter.org/github/RandyBetancourt/PythonForSASUsers/blob/master/Chapter%2004%20--%20Pandas%2C%20Part%201.ipynb | CC-MAIN-2020-40 | refinedweb | 1,540 | 68.57 |
It's ironic I read your message seconds after the first commit of my very
own Jaxb task (see below).... Thanks! I'm checking out the SUN's task as I
write this. --DD
/**
* Compiles using SUN's JAXB Java & XML Data Binding framework
* a DTD with optionally a binding schema (XJS file).
* <p>
* Uses the <dependset> and <java> task internally to do most
* of the work. Neither the DTD nor the binding schema are parsed, but
instead
* the output of the Jaxb compiler is parsed to <em>learn</em> a posteriori
* which files this combination of DTD and XJS files yields. This
information
* is then cached for subsequent run to avoid unnecessary re-generation of
the
* generated Java files. When lacking this cached information, this task
* always runs the Jaxb compiler again.
*/
public class JaxbSchemaCompiler ...
-----Original Message-----
From: Glenn Kidd [mailto:GKidd@placeware.com]
Sent: Tuesday, October 15, 2002 1:49 PM
To: 'Ant Users List'
Subject: RE: Jaxb task anyone?
This was posted 3 weeks ago. I just noticed that sun releases a beta
implementation of JAXB ( -
requires logon to download). In it there is an ant task for the JAXB xjc.
Hope this helps.
Glenn
-----Original Message-----
From: Dominique Devienne [mailto:DDevienne@lgc.com]
Sent: Tuesday, October 01, 2002 1:22 PM
To: 'ant-user@jakarta.apache.org'
Subject: Jaxb task anyone?
Has anyone wrapped SUN's jaxb com.sun.tools.xjc.Main into a task?
Or are people simply using <java> and/or <apply>?
Thanks, --DD
--
To unsubscribe, e-mail: <mailto:ant-user-unsubscribe@jakarta.apache.org>
For additional commands, e-mail: <mailto:ant-user-help@jakarta.apache.org> | http://mail-archives.apache.org/mod_mbox/ant-user/200210.mbox/%3CD44A54C298394F4E967EC8538B1E00F13D4573@lgchexch002.lgc.com%3E | CC-MAIN-2018-43 | refinedweb | 273 | 60.72 |
It seems std::is_literal_type was removed from C++ 20, so now GCC is warning that we use it in wtf/Variant.h:
DerivedSources/ForwardingHeaders/wtf/Variant.h:390:35: warning: 'template<class _Tp> struct std::is_literal_type' is deprecated [-Wdeprecated-declarations]
390 | template<typename _Type,bool=std::is_literal_type<_Type>::value>
| ^~~~~~~~~~~~~~~
This header is included in a lot of places, so it spams the build log pretty badly. Sadly, std::is_literal_type was removed without replacement. More info here:. So solution is a little unclear. I might just suppress the warning with IGNORE_WARNINGS and keep the current behavior... other proposals welcome.
<rdar://problem/73509470>
Maybe we can look for newer Variant implementations?
We added WTF::Variant to get something like std::variant earlier. Like WTF::Optional, some of us intended this as a stopgap, and eventually we’d just move along to std::variant. But in the case of WTF::Optional, we grew to like at least one of the semantics we chose for our version, and may delay switching over indefinitely. It’s possible that is not try for WTF::Variant, so perhaps we can just switch over to std::variant?
(In reply to Darin Adler from comment #2)
> It’s possible that is not try for WTF::Variant, so perhaps we can just switch over to std::variant?
You're right. We should give this a try.
Here are my preferences for solutions to this:
- switch to std::variant
- update to (or maybe it’s "merge in") a newer version of the variant implementation; in r204227, Sam Weinig got it from and we could look there for a newer version
- disable this warning just for Variant.h
- patch our Variant.h to sidestep the lack of is_literal_type (might be straightforward to do it at least well enough to not break WebKit?)
- disable this warning globally
We can also do any combination of these, and do one first and then move up to the "better solutions" later. I’d like to climb from the bottom up to the top as far as we can go.
(In reply to Darin Adler from comment #4)
> - disable this warning just for Variant.h
I'm going to take the path of least resistance and disable the warning... but it only needs to be done for the one function where it's used, not for the entire file.
(No patch yet because I got sucked into a quixotic quest to see how hard it would be to switch to std::variant after all.)
It's not necessarily hard, but we use it in a lot of places, and it requires more time than I have available to adjust them all.
I also attempted to hoist std::variant and its related methods (std::get, std::visit, std::holds_alternative) into the WTF namespace, but failed at that too. We also have one customization, WTF::switchOn, that could justify keeping wtf/Variant.h even if we were to successfully migrate to std::variant.
Next I tried adjusting the code to no longer use std::is_literal_type, but after squinting for a while I decided I'd better not touch it.
I tried searching for an updated version of our variant, but the upstream no longer exists.
I tried searching for a new upstream that we could use. looks like a good option, and it's license-compatible. One compiler warning seems like not a very great reason to switch from one implementation to another, though.
I'm just going to silence the warning. This function is removed from C++ 20, but I assume libstdc++ and libc++ will both keep it around forever. If it ever gets removed, then we'll need to revisit.
(In reply to Michael Catanzaro from comment #7)
> I'm just going to silence the warning.
Good call, thumbs up.
I’m eager to *eventually* get to std::variant and also to std::optional and I hope someone will do those projects.
Created attachment 422138 [details]
Patch
Comment on attachment 422138 [details]
Patch
View in context:
> Source/WTF/ChangeLog:6607
> - use the RunLoop. Let's match them for consistency, and to delete some
> + As of, Darwin platforms
> +.
(In reply to Darin Adler from comment #8)
> Good call, thumbs up.
>
> I’m eager to *eventually* get to std::variant and also to std::optional and
> I hope someone will do those projects.
I would like to use std::optional too, but I think WTF::Optional has some important subtle behavior difference that our code may rely on in an unknown number of places. But I cannot remember what it is! We had a long webkit-dev mailing list conversation about it a few years ago, but after searching our list archives for two minutes, I wasn't immediately able to find it.
In contrast, switching to std::variant should *probably* be fairly safe.
Comment on attachment 422138 [details]
Patch
View in context:
>> Source/WTF/ChangeLog:6607
>> +.
Seems fine to do it this way.
I don’t really understand the role of "gedit" here, but the patch seems fine.
gedit is the GNOME text editor!
WTF::Optional has a difference from some std::optional implementations: the move assignment operator sets the old object to WTF::nullopt. This difference is something we may have relied on in the past but we do not have solid evidence that we still rely on it; in fact when testing a patch to convert to std::optional I saw no test failures. In new code we try to use std::exchange when we rely on that behavior.
Committed r273841: <>
All reviewed patches have been landed. Closing bug and clearing flags on attachment 422138 [details]. | https://bugs.webkit.org/show_bug.cgi?id=220662 | CC-MAIN-2022-27 | refinedweb | 940 | 64.1 |
Question :
Hello I am quite new to pygame and I am trying to make an intro for a game where the user hovers over the computer screen and the screen then turns blue so indicate that when you press it the game will start. However, the blue rect simply isn’t showing up?
By the way, the introScreen is a like a gif but constructed from many different frames.
This is my code:
import pygame import pygame.gfxdraw import threading pygame.init() width = 800 height = 600 fps = 30 clock = pygame.time.Clock() white = (255, 255, 255) black = (0, 0, 0) red = (255, 0, 0) green = (0, 255, 0) blue = (0, 0, 255) screen = pygame.display.set_mode((width, height)) def introLoop(): while intro: for i in range(0, 26): clock.tick(8) i = str(i) introScreen = pygame.image.load("introScreen/" + i + ".gif") introScreen = pygame.transform.scale(introScreen, (width, height)) screen.blit(introScreen, (30, 30)) pygame.display.flip() def gameLoop(): while intro: mouseX, mouseY = pygame.mouse.get_pos() startRectArea = pygame.Rect(279, 276, 220, 128) if startRectArea.collidepoint(mouseX, mouseY): StartRect = pygame.draw.rect(screen, blue, (279, 276, 220, 128), 0) pygame.display.update() for event in pygame.event.get(): holder = 0 introThread = threading.Thread(target = introLoop) gameThread = threading.Thread(target = gameLoop) intro = True introThread.start() gameThread.start()
There is no error message it just doesn’t display the blue rect?
Please help as I need this for a school project.
Answer #1:
To use multithreading in PyGame, you’ve to consider 2 things:
Process an event loop in the main thread
Do all the drawing on and the update of the window in a single thread. If there is an “intro” thread and a “game” thread, then the “intro” thread has to terminate first, before the “game” thread can start.
In the main thread (program) you’ve to declare the threads and to initialize the control (state) variables. Start the “intro” thread immediately and run the event loop.
I recommend to implement at least the
pygame.QUIT, which signals that the program has to be terminated (
run = False).
Further it can be continuously checked if the mouse is in the start area (
inStartRect = startRectArea.collidepoint(*pygame.mouse.get_pos())):
introThread = threading.Thread(target = introLoop) gameThread = threading.Thread(target = gameLoop) run = True intro = True inStartRect = False startRectArea = pygame.Rect(279, 276, 220, 128) introThread.start() while run: if intro: inStartRect = startRectArea.collidepoint(*pygame.mouse.get_pos()) for event in pygame.event.get(): if event.type == pygame.QUIT: run = False elif event.type == pygame.MOUSEBUTTONDOWN and inStartRect: intro = False
In the intro thread the entire intro screen has to be drawn, including the background and the blue start area. If the intro is finished (the game is started), then at the end of the intro thread the main game loop is started:
def introLoop(): i = 0 while run and intro: clock.tick(8) filename = "introScreen/" + str(i) + ".gif" i = i+1 if i < 26 else 0 introScreen = pygame.image.load(filename) introScreen = pygame.transform.scale(introScreen, (width, height)) screen.blit(introScreen, (30, 30)) if inStartRect: pygame.draw.rect(screen, blue, (279, 276, 220, 128), 0) pygame.display.flip() gameThread.start()
The game thread starts, when the intro thread terminates. So the game thread can do the drawing of the game scene:
def gameLoop(): while run: clock.tick(60) screen.fill(red) # [...] pygame.display.flip() clock.tick(60)
Answer #2:
Try passing intro directly into the
introLoop function. Pass arguments into the thread with the
args parameter…
introThread = threading.Thread(target=introLoop, args=(intro))
You will also have to redefine the function as well:
def introLoop(intro):
Answer #3:
Try converting the images you want to load with
.convert()
pygame.image.load("image file path").convert()
Or try converting the image to an image with
.convert_alpha()
pygame.image.load("image file path").convert_alpha()
Pygame finds it MUCH easier and faster to load images that are converted with
.convert(), and with
.convert_alpha() it converts the images with their alpha layers (E.G. Transparency…) Hope this helps!
for more information, see these google results:
I recommend the first link, although this link is pretty pointless | https://discuss.dizzycoding.com/why-is-this-pygame-program-not-working-so-when-i-hover-over-the-computer-screen-it-turns-blue/ | CC-MAIN-2022-33 | refinedweb | 686 | 69.99 |
Allow grouping
As cowwoc has stated grouping of exceptions, why stop there? Why not allow grouping of all types. You can speficy generic types using grouping, why not apply this to all things.
E.g.
<br /> public class GroupingTest<br /> {<br /> private static String|StringBulider data = new StringBuilder();<br /> /*produces the real value if data is not a String. I.e. we are not in a readonly state.*/<br /> public static String|StringBuilder get(boolean copy)<br /> {<br /> if(copy)<br /> return data.toString();<br /> else<br /> return data;<br /> }<br /> public static void main(String[] args)<br /> {<br /> String|StringBuilder value = get(Boolean.parse(args[0]);<br /> //value is a CharSequence<br /> System.out.println(value.length());<br /> }<br /> }<br />
And & can be used to restict values.
Message was edited by: subanark
Message was edited by: subanark
I have seen many cases where a variable has two possible types and is declared as Object. Although you would still have to use instanceof it does add some meaning directly into the type. A good example of this is when a variable "could" be an array but is more likely to be a single instance.
That's a rare optimisation. Although I did do it the other day.
(Base class for sources of a particular event. Instances likely to be produced in large numbers but typically with a single listener attached. Interestingly with the listener a part of the interested class, there are no additional objects created for the observation behaviour. If you want to represent null and none distinctly, use a shared, immutable, zero-length array.)
The technique is a hack. It shouldn't have support for the language. The code should be clearly marked as a hack. Code with ugly behaviour should look ugly.
> As cowwoc has stated grouping of exceptions, why stop
> there? Why not allow grouping of all types. You can
> speficy generic types using grouping, why not apply
> this to all things.
> E.g.
> [code]
> public class GroupingTest
> {
> private static String|StringBulider data = new
> new StringBuilder();
> /*produces the real value if data is not a String.
> ng. I.e. we are not in a readonly state.*/
> public static String|StringBuilder get(boolean
> ean copy)
> {
> if(copy)
> return data.toString();
> else
> return data;
> }
> public static void main(String[] args)
> {
> String|StringBuilder value =
> alue = get(Boolean.parse(args[0]);
> //value is a CharSequence
> System.out.println(value.length());
> }
> }
> [/code]
> And & can be used to restict values.
+0.5
I don't think it is worth the complexity for a variable type, but as an extends on an interface, I could definitely see it.
[code]
public interface SpecialString extends String|StringBuilder;
[/code]
Such an interface couldn't have its own methods (hence the ; at the end) but it could merge two classes into one type, provided they didn't have incompatable method signatures.
Im not sure if its a good idea to allow refering to a method that belongs to multiple classes which is not defined in a common interface. In these cases the classes should have but did not implement an interface (or subclass) together for one reason or another.
Why not replace String|StringBuilder with CharSequence? Problem solved.
There are obscure reasons why you might want more freedom to use intersection types (strongly typed handling of relations, as a seasonal example), but I can't see the overriding utilitiy of this suggestion. | https://www.java.net/node/643858 | CC-MAIN-2015-32 | refinedweb | 565 | 65.93 |
Virtual Scrolling
Virtual scrolling is an alternative to paging. Instead of using a pager, the user scrolls vertically through all records in the data source.
The same set of elements is reused to improve the rendering performance. While the next data is loading, a loading indicator is shown on the cells. If the user scrolls back up after scrolling down to a next set of rows, the previous data will be loaded anew from the data source, like with regular paging, but the scroll distance determines the data to be loaded.
You can also Virtually Scroll the Grid Columns. More information can be found in the Column Virtualization article.
Requirements
To enable virtual scrolling:
Set
ScrollMode="@GridScrollMode.Virtual"- this enables the virtualization of items
Provide
Height,
RowHeight, and
PageSizeto the grid - this lets the grid calculate the position of the user in order to fetch the correct set of items from the data source.
Sample of virtual scrolling in the Telerik Grid for Blazor
@* Scroll the grid instead of paging *@ <TelerikGrid Data=@GridData <GridColumns> <GridColumn Field="Id" /> <GridColumn Field="Name" Title="First Name" /> <GridColumn Field="LastName" Title="Last Name" /> <GridColumn Field="HireDate" Width="200px"> <Template> @((context as SampleData).HireDate.ToString("MMMM dd, yyyy")) </Template> </GridColumn> </GridColumns> </TelerikGrid> @code { public List<SampleData> GridData { get; set; } protected override async Task OnInitializedAsync() { GridData = await GetData(); } private async Task<List<SampleData>> GetData() { return Enumerable.Range(1, 1000).Select(x => new SampleData { Id = x, Name = $"name {x}", LastName = $"Surname {x}", HireDate = DateTime.Now.Date.AddDays(-x) }).ToList(); } public class SampleData { public int Id { get; set; } public string Name { get; set; } public string LastName { get; set; } public DateTime HireDate { get; set; } } }
How virtual scrolling looks like (deliberately slowed down to showcase the loading placeholders)
The column where long text is expected (the
Hire Datein this example) has a width set so that the text does not break into multiple lines and increase the height of the row. See the notes below for more details.
Notes
There are several things to keep in mind when using virtual scrolling:
The
RowHeightis a decimal value that is always considered as pixel values. If you use row template, make sure it matches the
RowHeight. The grid
Heightdoes not have to be in pixels, but it may help you calculate the
PageSize(see below).
If the row/cell height the browser would render is larger than the
RowHeightvalue, the browser will ignore it. It can depend on the chosen Theme or other CSS rules, or on cell data that falls on more than one row. Inspect the rendered HTML to make sure the grid setting matches the rendering.
The default grid rendering has padding in the cells, and the loading sign has a line height set in order to render. This may impose some minimum heights that can vary with the theme and/or custom styles on the page. You can remove both with the following rules:
.k-placeholder-line{display:none;} .k-grid td{margin:0;padding:0;}.
The
RowHeightmust not change at runtime, because the new dimensions will cause issues with the scrolling logic.
Browser zoom or monitor DPI settings can cause the browser to render different dimensions than the expected and/or non-integer values, which can break the virtualization logic.
Do not mix virtualization with paging, as they are alternatives to the same feature.
Provide for a
PageSizeof the Grid that is large enough, so that the loaded table rows do not fit in the scrollable data area, otherwise the vertical virtual scrollbar will not be created and scrolling will not work. To do this, take into account the
Heightof the grid and the
RowHeight.
- The
PageSizecontrols how many rows are rendered at any given time, and how many items are requested from the data source when loading data on demand (see below). You should avoid setting large page sizes, you need to only fill up the grid data viewport.
To load data on demand, use the
OnReadevent, and in it, use the
PageSizeand
Skipparameters to know what data to return, instead of
PageSizeand
Pageas with regular paging.
- Data requests will be made when the user scrolls, but not necessarily when they scroll an entire page of data. Row virtualization is a user experience and UI optimization technique and not necessarily a data request optimization. The user may scroll a few rows, or they may keep scrolling and skip many pages. The grid cannot predict the user action, so it needs to request the data when the user changes what should be displayed.
Horizontal scrolling is not virtualized by default and all columns are rendered. You can enable Column Virtualization separately too.
Multiple Selection has some specifics when you use the
OnReadevent, you can read more about its behavior in the Multiple Seletion article.
Limitations
Virtualization is mainly a technique for improving client-side (rendering) performance and the user experience. Its cost is that some features of the grid do not work with it. An alternative to that is to use regular paging with manual data source operations to implement the desired performance of the data retrieval.
List of the known limitations of the virtual scrolling feature:
Hierarchy is not supported.
Grouping is not supported. Loading Group Data On Demand is supported, however.
The
Dataof the grid must contain more items than the
PageSizein order for the virtual scrolling feature to work. You can work around this with something similar to
ScrollMode="@(DataCollection.Count() > 30 ? GridScrollMode.Virtual : GridScrollMode.Scrollable)"
- If you set the
Skipmanually through the grid state, you must ensure the value is valid and does not result in items that cannot fill up the viewport. You can find more details in the Setting Too Large Skip Knowledge Base article.
When there are too many records, the browser may not let you scroll down to all of them, read more in the Virtual Scroll does not show all items KB article. | https://docs.telerik.com/blazor-ui/components/grid/virtual-scrolling | CC-MAIN-2021-39 | refinedweb | 985 | 51.99 |
isspace()
Test a character to see if it's a whitespace character
Synopsis:
#include <ctype.h> int isspace( isspace() function tests for the following whitespace characters:
- ' '
- Space
- '\f'
- Form feed
- '\n'
- Newline or linefeed
- '\r'
- Carriage return
- '\t'
- Horizontal tab
- '\v'
- Vertical tab
The isblank() function tests for a narrower set of characters.
Returns:
Nonzero if c is a whitespace character; otherwise, zero.
Examples:
#include <stdio.h> #include <stdlib.h> #include <ctype.h> char the_chars[] = { 'A', 0x09, ' ', 0x7d, '\n' }; #define SIZE sizeof( the_chars ) / sizeof( char ) int main( void ) { int i; for( i = 0; i < SIZE; i++ ) { if( isspace( the_chars[i] ) ) { printf( "Char %c is a space character\n", the_chars[i] ); } else { printf( "Char %c is not a space character\n", the_chars[i] ); } } return EXIT_SUCCESS; }
This program produces the output:
Char A is not a space character Char is a space character Char is a space character Char } is not a space character Char is a space character
Classification:
Last modified: 2014-06-24
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/i/isspace.html | CC-MAIN-2015-40 | refinedweb | 179 | 61.26 |
THE HOUSTON LAWYER
Volume 50 – Number 5
March/April 2013

A Futurist's View of the Legal Profession

inside...
The Future of Electronic Courts in Harris County
Getting to No (Paper)
Using Technology in the Courtroom
The Court of Babel: The Multilingual Courtroom of the Future
The Law Firm of the Future
Houston Bar Foundation Recognizes Outstanding Efforts by Volunteers
contents
Volume 50 – Number 5
March/April 2013

FEATURES

10  The Crystal Ball: Insights into the Future of Houston
    By Suzanne R. Chauvin

12  The Future of Electronic Courts in Harris County
    By Chris Daniel

16  Getting to No (Paper)
    By the Honorable Mike Engelhart

20  Using Technology in the Courtroom
    By Chance A. McMillan

26  The Court of Babel: The Multilingual Courtroom of the Future
    By the Honorable Josefina M. Rendón and Lingling Dai

34  The Law Firm of the Future
    By Toby Brown

37  Houston Bar Foundation Recognizes Outstanding Efforts by Volunteers

©The Houston Bar Association, 2013. All rights reserved.
departments
6 President’s Message: Meeting Challenges of Tomorrow
   By Brent Benoit
8 From the Editor: What I Want To Know is, Where is My Hoverboard?
   By Keri D. Brown
25 Houston Lawyers Who Made a Difference: Ewing Werlein, Sr. and Newell H. Blakely
   By Judge Mark Davidson
40 Committee Spotlight: The Lawyers Against Waste Committee Makes Houston a Greener and Cleaner City
   By Polly Graham
41 A Profile in Professionalism: The Hon. Marc C. Carter, Judge, 228th Criminal District Court
42 Off the Record: Jammin’ with The Writ Kickers
   By Erika Anderson
43 Legal Trends: New Civil Procedure Rules from the Texas Supreme Court, by Chance A. McMillan; Texas Supreme Court Reinforces the Distinction Between Contract and Tort Claims, by Suzanne R. Chauvin
45 At the Bar: Judicial Investitures
46 Media Reviews: Government Control of News: A Constitutional Challenge, reviewed by Suzanne R. Chauvin; The Law of Superheroes, reviewed by Robert Painter
48 Litigation MarketPlace
49 Placement Service
Harrison & Dagley LLP
Campbell & Riggs, P.C.
Chernosky Smith Ressling & Smith PLLC
Christian Smith & Jewell, L.L
Dinkins Kelly Lenox Lamb & Walker, L.L.P.
Dobrowski, Larkin & Johnson LLP
Dow Golub Remels & Beverly, LLP
Doyle Restrepo Harvin & Robbins, L.L.P.
Ebanks Horne Rota Moos Courtois, LLP
Galloway Johnson Tompkins Burr & Smith
Germer Gertz, L.L.P.
Givens & Johnston PLLC
Godwin Lewis, P.C.
Gordon & Rees LLP
Greer, Herz & Adams, L.L.P.
Hagans Burdine Montgomery & Rustay, P.C.
Harberg Huvard Jacobs Wadler Melamed, LLP
Harrison, Bettis, Staff, McFarland & Weems, L.L.P.
Hartline Dacus Barger Dreyer LLP
Hays McConn Rice & Pickering, P.C.
Hicks Thomas LLP
Hirsch & Westheimer, P.C.
McGuireWoods
Nathan Sommers Jacobs
Ogden, Gibson, Broocks, Longoria & Hall, LLP
Ogletree, Deakins, Nash, Smoak & Stewart, P.C.
Pagel Davis & Hill PC
Parrott Sims & McInnis, PLLC
Reynolds, Frizzell, Black, Doyle, Allen & Oldham L.L.P.
Tucker Barnes Garcia & De La Garza, & Petry L.L.P.

Firms of 25-49 Attorneys
Adams & Reese LLP
Akin Gump Strauss Hauer & Feld LLP
Baker & McKenzie LLP
Beck I Redden LLP
Beirne, Maynard & Parsons, L.L.P.
Coats I Rose
Cokinos Bosien & Young
Edison, McDowell & Hetherington LLP
Gibbs & Bruns LLP
Greenberg Traurig, LLP
Hoover Slovacek LLP
Jones Day
Littler Mendelson, PC
Olson & Olson LLP
Seyfarth Shaw LLP

Firms of 50-100 Attorneys
Baker Hostetler LLP
Chamberlain Hrdlicka White Williams & Aughtry

S & B Engineers and Constructors, Ltd
president’s message
By Brent Benoit, Locke Lord LLP
Meeting Challenges of Tomorrow
Houston is an exciting and challenging place to practice law. As Houston attorneys, we benefit from a vibrant legal economy, a buoyant general economy, a growing and diverse population, and growing important and cutting edge industries. All of this allows each of us to have the opportunity to practice law in a market that is the envy of lawyers around the country.

One of the secrets of Houston’s success is a drive that causes us to never be satisfied with the status quo and to always push for improvements. This often puts Houston at the forefront of change, change often for the better and change that leads to exciting new opportunities. The Houston legal market is no exception. With new firms constantly entering our market, profound changes in the structures of existing firms, the development of new, cutting edge practice areas, and a host of other factors, it is certain that the legal market of tomorrow will be substantially different than the legal market of today. As we contemplate our ever-changing city and legal economy, I thought it would be useful to our members to contemplate some of the major future issues confronting our attorneys that will need to be addressed. By discussing these issues and planning for them, we can prepare and react quicker than other areas, maintaining our envious position of a premier place to practice law.

In this issue you will find a look at what the future has in store for Houston and our profession. I know you will not be surprised to see that the future use of technology features prominently in our discussion. The explosion of electronic communications through Facebook, Twitter, text messages, and other means places additional discovery strains and costs on the litigation process. In addition, clients and courts are moving towards paperless files that will require lawyers and firms to be technologically savvy to succeed.

But the challenges of tomorrow extend beyond technology. The growing diversity of Houston will require our courts (as well as our firms and attorneys) to interact with a variety of cultures and languages. A changing legal market will require firms to adapt business strategies and operations, leading to firms that are materially different than the firms of today. I hope you enjoy the discussion of these topics and find them beneficial as you plan for the future.

I want to take this opportunity to also discuss a separate future issue and that is the continued success of our Bar. I have had the opportunity to meet with bar organizations from cities all over this country and I can report that our Bar is uniformly viewed as a leader among metro bar organizations. We should all be proud of this reputation. But, if we are to stay on top, we will need to work hard and I want to mention three ways that you can contribute to the future success of the Bar.

First, the Bar must continue to grow its membership. If you practice at a Firm
from the editor
By Keri D. Brown, Baker Botts L.L.P.
What I Want To Know is, Where is My Hoverboard?

When HBA President Brent Benoit tasked The Houston Lawyer with putting together an issue focused on a futurist’s view of the legal profession, the Editorial Board was put to the test. An entire issue on the future of the legal profession? Regardless, our Board, with input from our fearless leader (that would be Brent) and the time and effort of our authors, has put together an interesting issue covering a variety of forward-looking topics.
In March, Slate introduced me to an article1 published in 1988 by the Los Angeles Times Magazine predicting what life would be like in 2013, a mere 25 years into the future.2 The article3 made a variety of predictions about the world (or at least Los Angeles), some of which came to fruition and some that are laughable.

The house of the future was fully automated — coffee makers and ovens could be programmed to begin brewing and baking, respectively, at the appointed hour. The article missed the boat on the extent of technological advances in the news world, predicting that news personalized to the family would be printed off the home computer each morning. Close, but no cigar. The home of the future featured a $5,000 home robot named “Billy Rae” whose first job of the day was to wake up the family. Billy Rae later changes the sheets and does other chores around the house, including (poorly) attempting to make dinner.

In the alternate-universe 2013, the patriarch of the family uses technology that does exist today as he teleconferences with colleagues in Tokyo. Later, he drives to a park-and-ride and hops onto a subway (using a Metro Rail card to gain access). Once at his office, he’s immediately 3-D videoconferencing with more associates (well, we aren’t all quite there yet). The matriarch telecommutes, using her “powerful home computer” — the only one in the house — and never has to leave the study (at least not until she drives to her satellite office). She also has the benefit of email to remain in
contact with her colleagues. Lawyers always wishing to shave time off their morning preparations for work will be disappointed to know that we do not today have “Denturinse,” a mouthwash that is “much easier and more effective than toothbrushing.” The home of the alterna-2013 is outfitted with video intercoms. (At least we have Facetime on our iPhones and iPads today.) The child of the house carries to school a personal portable computer holding his educational history. Floppy disks (remember those?) are still in wide use, and encyclopedias are electronically stored on laser discs. While the family can access video on demand, it does require a call to the cable company to order a movie. For some reason, the house of the future has a “stark white kitchen.” Thank goodness we still have color options in our homes in the real 2013. The predictions about the technology of cars of the future was pretty close: Cars are operated with key-cards, adjust automatically to fit the driver, and feature GPS (although called an electronic map system). I daresay that the authors featured in this issue of The Houston Lawyer make no such bold predictions, but they do provide us with information on a variety of advances in our profession from a variety of perspectives. Toby Brown writes about the future of the law firm, looking at the firm of the future from five perspectives: human resources, facilities, financial management, marketing, and technology. Speaking of technology, Chance McMillan provides an overview of technology in the courtroom, exploring the ways our courtrooms have become more user-friendly of late. Judge Mike Engelhart, one of the most (if not the most) aggressive Harris County jurists in the quest to reduce our reliance on paper, writes about ways he has attempted to reduce paper in his courtroom and the benefits of eliminating the paper waste. Harris County District Clerk Chris Daniel explains the changes on the way Continued on page 49
BOARD OF DIRECTORS
President: Brent Benoit
President-Elect: David A. Chaumette
First Vice President: Benny Agosto, Jr.
Second Vice President: Todd M. Frankfort
Secretary: Laura Gibson
Treasurer: M. Carter Crow
Past President: Denise Scofield

DIRECTORS (2011-2013)
Hon. David O. Fraga, Neil D. Kelly, Alistair B. Dawson, Brent C. Perry, Jennifer Hasley, Daniella D. Landers

DIRECTORS (2012-2014)
Warren W. Harris, John K. Spiller

editorial staff
Editor in Chief: Keri D. Brown
Associate Editors: Julie Barry, Angela L. Dixon, Robert W. Painter, Don Rogers, Jill Yaziji
Editorial Board: Erika Anderson, Sharon D. Cammack, Suzanne Chauvin, Melissa Davis, Jonathan C.C. Day, Sammy Ford IV, Polly Graham, John S. Gray, Stephanie Harp, Al Harrison, Hon. Dan Hinde, Farrah Martinez, Chance McMillan, Judy L. Ney, Jeff Oldham, Hon. Josefina Rendon, Tamara Stiner Toomer
Managing Editor: Tara Shockley
HBA office staff
Executive Director: Kay Sim
Membership and Technology Services Director: Ron Riojas
Administrative Assistants: Ashley G. Steininger, Bonnie Simmons
Membership Assistant: Ariana Ochoa
Receptionist/Resource Secretary: Lucia Valdez
Committees & Events Director: Claire Nelson
Committee & Events Assistant: Rocio Rubio
Director of Education: Lucy Fisher Cain
Continuing Legal Education Assistant: Amelia Burt
Communications Director: Tara Shockley
Communications/Web Designer: Brooke Benefield
The Crystal Ball: Insights into the Future of Houston
By Suzanne R. Chauvin
The Center for Houston’s Future — The Region’s Think Tank — works to solve the eight-county Houston region’s most difficult challenges by bringing together leaders from throughout the community and business, providing research and identifying solution strategies. For 31 years, Rice University has conducted extensive survey research into the region’s economy, population, and changing attitudes, making the Kinder Houston Area Survey the nation’s longest running study of its kind. The Kinder Institute for Urban Research was launched by Rice University under the direction of sociology professors Stephen L. Klineberg and Michael O. Emerson. The future of Houston’s legal community is tied intrinsically to the demographics and other information presented in this research. The Center for Houston’s Future and the Kinder Institute for Urban Research at Rice University shared these facts and trends of the Houston region.

• The petrochemical industry gave Houston a strong competitive advantage when the nation’s economy was focused on natural resources. Increasingly, however, Houston’s economy is becoming global, and is more high-technology and knowledge-based.1
• The city of Houston now has 2.1 million people – about the same as the population of Manhattan.2
• The Center for Houston’s Future has developed two scenarios centered on how Houston will look in 2040; one scenario is based on explosive population growth, and the other is based on more steady growth, with a focus on quality of life. In discussions surrounding these scenarios, Houston leaders focus on how the region should plan now for either scenario.3
• Attitudes about the urban lifestyle are changing. In 2012, more than half the people surveyed in Harris County said they would choose a “smaller home in a more urbanized area, within walking distance of shops and workplaces” over “a single family home with a big yard,” where they would need to drive more.4
• Over two-thirds of area residents believe that Houston’s ethnic diversity will eventually become “a source of great strength for the city.”5
• Between 2000 and 2010, the Houston metropolitan area added more people than any other metropolitan area in the United States.6
• Houston is the most culturally diverse large metropolitan area of the country, surpassing New York, Los Angeles, Chicago, Miami and San Francisco.7
• As diverse as Houston is, the cities of Missouri City and Pearland are the region’s most racially and ethnically diverse cities.8
• Harris County is a majority-minority county. About 33 percent of the population is Anglo, 18 percent is African-American, 40.8 percent is Hispanic and 7.7 percent is Asian or other.9
• Most seniors in America are Anglos, as are the 76 million Baby Boomers, aged 47 to 65. During the next 30 years, the number of Americans over the age of 65 will double.10
• A higher percentage of younger Americans are non-Anglo, and are less privileged in terms of income, education, health status and opportunities.11
• In Harris County, over half the population over age 65 is Anglo. Among those aged 0 to 17, over half the population is Hispanic. For ages 18 to 64, over half the population is minority.12

Suzanne R. Chauvin is a partner in the Houston office of Strong Pipkin Bissell & Ledyard, L.L.P. Her practice emphasizes complex commercial litigation, including contract disputes and business torts, toxic tort defense, products liability and industrial accidents, commercial real estate disputes, and labor and employment disputes. She is a member of The Houston Lawyer Editorial Board.

Endnotes
1. Rice University Kinder Institute for Urban Research and Klineberg, Stephen L. (2012). The Changing Face of Houston: Tracking the Economic and Demographic Transformations Through 31 Years of Surveys, available at Survey/Complete%20Presentation%20(2012).pdf.
2. Id.
3. www.centerforhoustonsfuture.org/cfhf.cfm?a=cms,c,261,3,29
4.
5. Id.
6. Rice University Kinder Institute for Urban Research and the Hobby Center for the Study of Texas, and Emerson, Michael O., Bratter, Jennifer, Howell, Junia, Jeanty, P. Wilner, Cline, Mike (2011). Houston Region Grows More Racially/Ethnically Diverse, With Small Declines in Segregation. A Joint Report Analyzing Census Data from 1990, 2000, and 2010, available at http://kinder.rice.edu/uploadedFiles/Urban_Research_Center/Media/Houston%20Region%20Grows%20More%20Ethnically%20Diverse%202-13.pdf.
7. Id.
8. Id.
9.
10. Rice University Kinder Institute for Urban Research and Klineberg, Stephen L. (2012). The Changing Face of Houston: Tracking the Economic and Demographic Transformations Through 31 Years of Surveys, available at Survey/Complete%20Presentation%20(2012).pdf.
11. Id.
12. Id.
By Chris Daniel
The Future of Electronic Courts in Harris County

The goal of the Harris County District Clerk’s Office is to streamline the processes of the judicial system and create transparency. Technology plays a key role in helping us achieve an improved courthouse that better serves the citizens. Electronic dockets, e-Filing, and other features have greatly improved and changed the way the bar
practices law in Harris County. Former Harris County district clerks and their staff members have made numerous advances that resulted in quicker docket movement, faster case-related communications and fewer pieces of paper filling file cabinets. While we still have tons and tons of paper and hundreds of file cabinets, the wave of the future in Harris County is electronic courts.

By Order of the Supreme Court
The Texas Supreme Court has ordered electronic filing in all civil, family and probate cases by attorneys in appellate courts, district courts, statutory county courts, constitutional county courts and statutory probate courts. On January 1, 2014, Harris County is scheduled to participate in the mandatory statewide electronic filing system in all of the previous courts mentioned. Beginning next year, all attorneys must file electronically through the Texas State Portal, called TexFile. All pro se litigants may file either electronically or in person by paper at the courthouse. While this order will not be without its short-term hurdles, in the long run it will greatly improve the legal community’s efficiency at work and generate enormous tax payer savings for years to come. The Harris County judicial community will work tirelessly together to meet this steadily approaching deadline.

The Supreme Court order disallows any other filing alternative. The only electronic filing system deemed permissible will be TexFile. As a result of the order, many of our local improvements like FREEFax will no longer be available to the bar in these courts. While these improvements have made the practice of law more efficient and cost-effective, a truly paperless system—much like the current federal PACER system—is the best option for future courts. The ability to file anywhere in the world allows an advocate to respond to his or her client’s needs immediately and without delay. This concept is not only good for the client;
it also saves money for the taxpayer. The Supreme Court has yet to give district clerks the specifics of how the new e-filing system will work. As these requirements are communicated to my office, we will do our best to inform the bar on how the new system will work and how it will affect your practice. Additionally, there are many subfeatures that tie into such a paperless system.

Technological Advances
Many features, such as electronic citations, electronic subpoenas, electronic signatures for judges and certain court parties/personnel and courtroom kiosks are being piloted and programmed as we speak. For decades, citations sent to the Constable’s Office for service first had to be sent by paper to Precinct 1, since that precinct is both downtown and the closest in proximity to the District Clerk’s Office. Then, Constable Precinct 1’s Office was responsible for sorting and delivering the citations to the other constables by agreement, leading to delays and subjecting the process to human error. Now, through electronic citations and service of process, the paper will not only be eliminated, but the process will become streamlined, leading to faster service and freeing up resources to more adequately address crime and other duties.

Electronic subpoenas (“e-Subpoena”) are part of a laundry list of technological items to improve the criminal court process. We continue to survey the defense bar and the District Attorney’s Office to find out what is needed and wanted, and this was a major item that both sides requested. This is a multiagency project involving the Sheriff’s Office, the District Attorney’s office and the District Clerk’s office that feeds into the larger Justice Web (“JWEB”) project to revamp the Justice Information Management System (“JIMS”). E-subpoenas will allow for expedited trial settings and speed up justice as a whole.
New County JIMS Project
JIMS, once a state-of-the-art case management system, is now aged beyond its intended use and is set for a multi-stage overhaul. An enormous and important undertaking, creating the new case management system is a joint countywide effort that will provide a vastly improved product for the needs of Harris County. Beginning soon, the first stage is to move the entire system off a mainframe and onto the web. This project phase is estimated to take about three years. Once the system has been upgraded to a web-based system (JWEB), each department will have the opportunity to review and provide input to make department specific JIMS screens user-friendly. Ultimately, once JIMS has been converted off the mainframe, end-users should begin to see dramatic improvements to departmental JIMS screens as the county tackles them.

Courtroom Kiosks
Simultaneous to the JWEB project is the courtroom kiosks initiative. As part of our goal to streamline and increase the data that attorneys can access in the courtroom, criminal courts will soon have kiosks that allow them to see their docket (mug shots and criminal history for each client included), the judge’s daily docket and any motions, documents, etc. that have been filed with the case. The screens will be touch screen, and the information will be user-friendly and easily searchable. Judges will be able to make public and private notes. Furthermore, staff and attorneys will be able to sign documents electronically. Attorneys will not only be able to sign the same documents (users sign in by SPN), but they will be better able to manage their practices. Similar kiosks will be custom designed for juvenile, family and civil courts based on common practices and needs. Future piloting of kiosks for public libraries and law libraries will cut down the paper submitted by pro se or indigent filers.
Electronic Signatures
Another upcoming initiative under consideration is the use of electronic signatures. This would allow judges to bypass signing papers and permit the judge to digitally sign any type of document filed electronically. This will result in the rapid reduction of paper at the courthouse. No longer will the clerk have to print the electronically filed document for the judge to sign and then rescan the signed document back into the system. Also, judges may be able to expedite rulings if they can sign digitally while reviewing pleadings online without waiting for the appropriate paper version to be printed and presented for signature. The goal is to allow judges, attorneys, court staff and the viewing public to securely and quickly sign documents at their computers. As Local Rules are updated, this might also allow judges to sign orders and documents remotely from anywhere in the world. Ultimately, the point is to speed up the process while drastically
cutting down on the volume of paper received into the clerk’s office.

What the Future Holds
These future components of electronic courts could then be incorporated into a mobile device or tablet-friendly technology and even be synced to future applications (“apps”) for mobile devices. In addition, the public would benefit from various features such as background checks, document searches and rescheduling jury service. Each feature could be incorporated into the app as a tab or button for easy access and use. As we continue to move forward with electronic courts, the process created by Harris County will set the standard across the state just as JIMS once revolutionized the statewide justice system and legal practice. The goal of achieving truly electronic courts will not come without its hurdles. As a joint effort of all county stakeholders, there are many concerns that need to be addressed. However, with sound leadership from the Justice Executive Board and with cooperation throughout, there will be few opportunities for failure, and instead many chances to shine.

We live in exciting times for our legal community. As we embrace change, your practice and your clients will be better served with the technological advances knocking at our doorstep. My job is to shepherd the changes and make the advances manageable for you. When each stage is complete, we will have the usual public classes and CLE for all users of the website/e-courts to teach the community how to use the features. We welcome comments and suggestions to help us facilitate the process along the way to provide the best electronic filing system possible.

Chris Daniel was elected Harris County District Clerk in 2010. He graduated from the University of Texas at Austin with a degree in mechanical engineering and from South Texas College of Law in 2010.
Getting to No (Paper)
By the Honorable Mike Engelhart
“Getting to Yes” is a famous 30 year-old guide to successful negotiations and sales. From my experience as an environmentally and fiscally conscious judge, getting to no (paper) in the courtroom will continue to take a heck of a sales job on my part.

In April 2009, I became the first judge in the history of Harris County to require electronic filing in all new cases filed in my Court. The District Clerk and I subsequently teamed up to create and implement FREEfax, a free local portal that approximates e-filing in many ways, yet still adheres to the 1991 Harris County local fax-filing rules. Today, many Harris County district courts require electronic filing, and all civil district courts allow FREEfax filing. Additionally, both of Houston’s courts of appeals mandated electronic filing for briefs in most civil cases, as well as for clerk’s and reporter’s records. In my opinion, these efforts helped fend off more catastrophic layoffs in the District Clerk’s office, save taxpayer funds, eliminate countless car trips to the courthouse and preserve exactly1 four million and seven trees.

In December 2011, I was the only Texas district judge invited to speak before the Supreme Court about the future of
Texas e-filing. The Supreme Court has now ordered that after December 31, 2013, Harris County District and County Court filers, other than pro se parties, must electronically file all documents through a newly implemented statewide electronic filing system. Harris County’s push for electronic filing, and FREEfax’s impact on the total volume of paperless filing, in my view, accelerated the Supreme Court’s mandatory e-filing timetable by at least a few years. We have come a long way in the last four years toward eliminating paper usage at the Harris County courthouse. But even though I ask parties to give me their proposed jury charges on a flash drive, my bailiff copies the jury’s read-along charges double-sided, and I prepare for hearings by reading filings only on my computer screen, I still find piles of paper all over my chambers. So, how do we get rid of those stubborn stacks once and for all? And do we want to? What would that look like?
What’s in the Stacks?
It’s best to attack the paper stacks by identifying them. The piles of pages populating my chambers are predominantly: (1) manila envelopes stuffed with documents filed under seal for in camera inspection; (2) case law, exhibits, and deposition transcripts handed to me during hearings to review later; (3) court orders my staff or I have printed and signed and put in my outbox for my clerk to enter into our case management system; and (4) filled notepads with notes taken during hearings or while reading motions on my computer. Each of these categories presents different obstacles for elimination.

Sealed Documents
Repeat customers in the 151st have learned that I do not want stacks of supposedly sensitive papers to review in camera. Not everyone has gotten the message, though, and weekly, I find a ream’s worth of paper in a large envelope in my inbox, taped up like Arian Foster’s bad ankle, and marked FILED UNDER SEAL all over. Solution: Put these on a flash drive or DVD disc. If you are concerned about security, give the flash drive a password known only to your side and the Court.

Exhibits, Depositions, and Case Law
It is certainly helpful to have exhibits, case law, and deposition excerpts at the hearing, but if my goal is to eliminate paper, what do we do to get around this sticky wicket? The answer to this one, as with most things lawyerly, is preparation. I know it is a pain, and I cannot practically refuse to look at a paper exhibit, a deposition or a case cite, but you can help with my target-zero-paper mission with a little foresight. First, if you insist that I have my own set, bring these items to the Court electronically in PDF format on a single flash drive. These should be the only things on the drive, and should have descriptive file names. Make sure it is virus free! Give it to me at the start of the hearing and I can view it on my laptop as we proceed. Alternatively, learn how to quickly hook up your computer to the court’s A/V system and put these documents on my A/V screen. Or, as I have seen lately, put them on your iPad or other tablet and hand them to me as we proceed. Any of these methods will aid our cause. And, as a bonus, the more thought you put into how you will use these items at the hearing, the more prepared you will be for the hearing in general.
Court Orders

When I took office in 2009, my clerk would regularly present me with a yellow folder full of paper comprised of "agreed" motions and orders, including continuances and dismissals. I would sign the orders, if appropriate, and then recycle the paper motions. My clerk and I soon undertook to eliminate this waste. She took advantage of the tools in our DEEDS case management system to place these motions and orders electronically in a separate section of my home page. Now I can read these electronically and print only those orders to be signed. Yet, if I want to get to 100 percent paperless, it is still problematic that I have to print the orders to sign them, and then have the clerk scan them into the file. One important (hopefully not-too-distant) future innovation for Harris County judges would be a change in the law to allow us to sign orders electronically. Presently, Harris County's local district court e-filing Rule 6.1(a) (available at) requires that judges print and sign paper orders only.2 Thus, the elimination of these paper documents would require both a technology upgrade for judges' computers, and a change to our local rules (to be approved by the Supreme Court of Texas). There are dozens of vendors advertising online, with technology ranging from signing with a stylus on your smartphone, to signature pads on your desktop, like at your pharmacist's counter. Separately from our local rule change, perhaps the Supreme Court's new e-filing rules that will accompany its recent statewide e-filing mandate will address this issue.

Legal Pads

I learn best by writing what I hear. So, I take a lot of notes in hearings and during voir dire. While I often do this in a Word document on the computer on my bench, sometimes this method is not practical. For example, when I am reading lengthy motions and need to keep track of the parties and issues, I will take handwritten notes. As a result, I usually have several filled or half-filled notepads sitting around my chambers at any one time. I suspect there are many others out there like me with legal pads scattered around their offices. How to eradicate these? Well, the most elegant solution seems to be something called a digital notepad. A quick Google search demonstrates that several are available for under $150. An iPad app that uses a stylus, like uPad or Noteshelf, is a potential solution as well. But the digital notepad is specifically designed to mimic a paper notepad and, using OCR software, can save your handwritten notes in editable format on your computer. One brand even claims to be perfect for southpaws like me.

Are We There Yet? How's the View?

So, what if we are successful in this endeavor? If we imagine a 2014 complete with mandatory e-filing in all County district courts, and court chambers with no stacks of sealed paper documents, no paper exhibits, case law, or deposition excerpts, no printed and to-be-scanned orders, and no legal pads gathering dust, how does that look? To me it looks like even more taxpayer savings, more green leafy trees in old growth forests, cleaner air, and a really neat and tidy desk. It also looks like a highly-coordinated file management system that efficiently and reliably serves litigants in Harris County. It would look like progress and a furtherance of justice. I can't wait! Next up, let's see about getting solar panels and windmills on the courthouse roof.

Judge Mike Engelhart was elected as Judge of the 151st Civil District Court in Harris County, Texas in November 2008. He has taken the lead on many e-filing initiatives since his election. Judge Engelhart is board certified in personal injury trial law.

Endnotes
1. Approximately.
2. On February 12, 2013, the Board of District Judges of Harris County approved a local rule change that would allow for electronic signatures on orders. The change still must ultimately be approved by the Texas Supreme Court.
Using Technology in the Courtroom
By Chance A. McMillan

Technology affects nearly every aspect of our lives. If it is not some new product being introduced, it is an old product being improved. In the past 20 years, we have seen computers become embedded into our daily lives, experienced cell phones become personal assistants, and witnessed globalization open doors to markets that were inaccessible before. We are truly living in interesting times. So, how has technology affected Harris County attorneys? Specifically, how has technology affected the practice and presentation in the courtroom? This article focuses on how technology is being utilized in Harris County courtrooms and gives insight into how technology may assist us in the future.

New Courthouses: State-of-the-Art Tools for Persuasion and Efficiency

I was not around to practice law in the old civil courthouse, which has been renovated and now houses the First and Fourteenth Courts of Appeals. Before it was replaced by a modern, tech-friendly facility, the old courthouse was overcrowded and its technological capabilities were limited to blow-up presentations, overhead projectors, chalkboards, and dry erase boards. Practicing in the new civil and criminal courts has changed dramatically. The civil courthouse is 660,000 square feet and has Wi-Fi. It has 39 courtrooms equipped with state-of-the-art technology to ensure the administration of justice runs smoothly. Each courtroom has the technological capability to display documents and videos in a number of locations throughout the courtroom. Jurors can view documents or videos collectively on ceiling-mounted projectors or individually on screens provided in the jury box. Screens are also located at counsel tables, the witness stand, and at the bench.
Each judge has a computer equipped with Microsoft Office and connected to the Harris County database. This allows judges to easily navigate back and forth through documents when hearing and deciding motions. For example, in a summary judgment hearing, a judge can easily read the parties' motions, listen to the oral arguments, and pull up the electronically filed exhibits before coming to a decision. This enables each judge to work efficiently and leads to more litigants being heard daily. Likewise, parties are no longer limited to simply arguing at hearings. Now, attorneys regularly create PowerPoint presentations or use the court's video capabilities to enhance their presentations. Parties can also prepare extremely detailed demonstrative exhibits to educate the judge and jury. Thus, technology has become an aid in the art of persuasion.

Skype has also made its way into the courtroom. Skype is a software application that allows users to make voice calls over the internet while presenting images of the users on a computer screen. This allows the type of face-to-face communications that regularly takes place in court. It is already being used in divorce actions involving military personnel stationed overseas. This technology, or similar technology, could be used by courts to reduce travel expenses or other litigation costs.

As the use of technology grows, lawyers who do not keep up with the new courtroom technology may find themselves at a disadvantage. Unfortunately, the courtrooms do not have neutral technicians to assist attorneys in trial or court procedures; therefore, prudent lawyers preparing for trial should consider going to the courthouse in advance to get a basic understanding of the courtroom's amenities.

Visuals Dominate the Court Scene

Research has shown that as much as 80 percent of all of our learning takes place through our eyes.1 Learning is most effectively accomplished through visual comprehension. This is not shocking, considering the multimedia age we live in. We spend countless hours on the internet surfing, working, and communicating. We are constantly exposed to various visual forms of advertising throughout the day. This kind of constant communication and media exposure has created significant challenges for trial attorneys to keep jurors' attention.

One leading trial consultant suggests that attorneys should think like film directors when preparing for trial.2 It should come as no surprise, given the technological opportunities the new courthouse affords, that attorneys are utilizing visual aids more than ever when communicating to jurors. The most common aid used is Microsoft PowerPoint; however, new trial presentation software provides even more interactive opportunities.

Recently, I watched a trial in which a surveillance video was the primary focus for both parties, and both parties used trial presentation software to present their cases. The parties were able to stop and start the video at specific points almost instantaneously without mishaps. They also used video depositions with subtitles; the attorneys were able to highlight portions of deposition exhibits and show them to the jury while questioning the witnesses. It was very effective and appeared to keep the jury's attention.

Trial presentation software allows attorneys to better utilize exhibits, video evidence, and deposition testimony. Attorneys can highlight and write on relevant sections of documents, and can manipulate exhibits to focus on important points. Although the software has been around for some time, new versions are constantly being introduced. With practice and patience, an attorney can learn to operate this software smoothly to more effectively present cases.

Attorneys should also consider hiring a trial consultant. Trial consultants offer a variety of services applying technology to assist in presentations, including document management, medical illustrations, 3D printing, medical film, trial exhibits, court reporting, and video services, to name just a few. It is not uncommon at trial to see video technicians assisting both sides in presenting their cases. Obviously, not every case requires a trial consultant or trial presentation software. In some cases, the potential benefit will be greatly outweighed by the cost. In other cases, these services can provide a cost-effective way of
presenting complex facts and ideas. An attorney should consider the complexity of the case and its specific issues when deciding whether to use a trial consultant or trial presentation software.

Products to Consider

A variety of trial presentation software is available today. Some to consider are: Trial Director 6 (by InData), Sanction 3 (by LexisNexis), and Visionary 8 (by Visionary Legal Technologies).3 Visionary 8 offers a "Pro" version and a "Free" version. The "Free" version is meant for small cases with five or fewer depositions, and may be perfect for small firm practice or individual practitioners. Most of these software programs are available for Microsoft Windows and iPad.

Regardless of firm size, attorneys should consider Skype. To get started, all that is required is to download the software and purchase a video camera. Skype is inexpensive and can be used for video conferencing. Skype could be the next big thing to cut down travel expenses, benefiting both clients and attorneys. It is not too farfetched to believe
Skype or something similar will be used in the very near future for court proceedings.

Access to Additional Information

The HBA's Law Practice Management Section offers CLE programs on technology and networking opportunities. Dues are $20 per year. The HBA/CLE Committee offers a number of technology-related seminars throughout the year, and many of them are free to HBA members (). The State Bar of Texas Computer and Technology Section is an excellent resource to find technology applications for lawyers. It only costs $25 to join, and members are continually updated on new technology available to attorneys. The section also offers apps for dozens of Texas and federal codes, rules, and statutes. You can download the apps to iPhone/iPad/iPod Touch and Droids. The American Bar Association (ABA) also keeps its members technologically
educated. Each year, the ABA holds its annual Tech Show to assist lawyers in applying technology in their practices. Last year's conference included such topics as: Effective E-Filing in Small Cases, Courtroom Technology: Evidence and Persuasion, Productive PDF Tips and Tricks Every Lawyer Should Know, How to Stay Safe in the Cloud, Managing the Information Tsunami, Electronic Discovery for Small Cases, Supercharging Outlook: MS Outlook Hidden Features and Add-Ons, and Beyond the FAQ: Building a Client-Friendly Website. It is beyond the scope of this article to discuss all the new products that are available and that may be employed by the attorney in practice, but every attorney should consider incorporating technology to improve his or her practice. The question is how much. To answer this question, an attorney should consider the firm size, client expectation, and the amount of money available to spend on new or additional technology.
Conclusion

Technology is a valuable asset to any profession, and the legal profession is no different. We live in a fast-paced and hyper-interactive world. Every attorney can benefit by having a basic understanding of what technology is available and the options at the courthouse.

Chance A. McMillan is an associate with Thomas N. Thurlow & Associates, where he dedicates his practice to personal injury and civil litigation. He is a member of The Houston Lawyer Editorial Board. The author would like to thank the Honorable Roy L. Moore, of the 245th Judicial District Court, and Kris M. Allfrey, a trial consultant and president of The Legal Wizards, Inc.

Endnotes
1. Robert R. Farrald and Richard G. Schamer, Journal of Learning Disabilities 6 (1973).
2. William S. Bailey and Robert W. Bailey, Show the Story (Trial Guides, LLC, 1st ed., 2011).
3. Trial Director 6 retails at $695; Sanction 3 retails at $895; Visionary 8 retails at $495.
Houston Lawyers Who Made a Difference
By Hon. Mark Davidson
Ewing Werlein, Sr. and Newell H. Blakely

For most people, the biggest difference in our lives, other than our parents, was made by the teachers who educated us in our formative years. Few legacies last longer, since their work carries forward to the next generation. For members of the bar, our law professors brought us from the world of our undergraduate majors to people who could "think like a lawyer." Every one of our law professors made a difference in our system of justice. Two law professors in our community stand out for their life's work: Ewing Werlein, Sr. and Newell H. Blakely.

Ewing Werlein, Sr. co-founded the Houston Law School in 1923 as a night law school. His quest was to give young men who could not afford to go to school on a full time basis an opportunity to attend classes at night and become lawyers. Among his more famous students were Roy Hofheinz and Searcy Bracewell, together with some of the great judges of the 1950s and 1960s, including Ben Moorhead, John Compton and Bill Hatten. Werlein was also a civic leader. In 1942, as a member of the Houston School Board, he led the effort to equalize the salaries of White and African American teachers. In 1958, he became the first judge of the 157th District Court. After a month of service, he became a member of the First Court of Civil Appeals, where he acquired a reputation as a great judicial scholar and writer. In that role, he was a mentor to young justices of the court during a time in which the number of appellate justices in Houston doubled.

Newell Blakely was one of the founding professors of the University of Houston Law School. Three generations of law students learned evidence from him. His austere classroom style and dominating intellect guaranteed both preparation and understanding of the subject by his students. He also served as dean of the law school during a time in which it grew in size and stature. In the early 1970s, Blakely helped draft what became the 1974 Texas Penal Code. In the 1980s, he helped write the Texas Rules of Civil Evidence that were promulgated by the Texas Supreme Court and the Texas Rules of Criminal Evidence promulgated by the Texas Court of Criminal Appeals.

All lawyers who make a difference for their clients owe their law professors a debt of thanks. Werlein and Blakely were two of the best. To their hundreds of students, and to those students' thousands of clients, these two legendary educators made a difference that will endure for years to come.

The Hon. Mark Davidson is an MDL judge and judge (retired) of the 11th District Court. His column for The Houston Lawyer focuses on Houston attorneys who have had significant impact on the law, the legal profession and those served by the law.
The Court of Babel: The Multilingual Courtroom of the Future

Therefore is the name of it called Babel; because the Lord did there confound the language of all the earth. – (Genesis 11:9)
By the Hon. Josefina M. Rendón and Lingling Dai

Imagine a courtroom in Houston, Texas circa 2050. "All rise," the court bailiff announces in a heavy accent to call the court in session. Lu E. Peng remains seated with a confused look on her face as the entire courtroom stands up almost in unison. Lu is a Chinese national who speaks only Mandarin, although she studied English in college. A savvy businesswoman who owns a manufacturing company, she has three business partners from India, Africa, and Mexico, all of whom market Lu's products to the South Asian, African, and Latino communities in Houston. Now she sits in a Texas courtroom being sued by people she once worked with side by side. The court informed Lu that she had the right to appear by a remote digital channel, and all of her co-defendants chose to do so. Upon advice of her American counsel, Lu appeared in person—the old-fashioned way—even though it is the technologically-advanced 2050. There is a lot of whispering in different languages, and her co-defendants' faces pop up on the court's screen in dialogue bubbles. She wonders, "How can we all communicate at the same time?"
This scene is not far from current reality. Already, Houston's courtrooms frequently have trials where languages other than English are spoken. In most instances, the foreign language is Spanish; however, other languages, some very dissimilar to English, are also spoken with increasing frequency. Under those circumstances, what one hears or understands may be significantly different from the original statements. This reality presents the need for accurate interpretation to bridge the gap between English and the target foreign language, and vice versa. Foreign accents already pose challenges for court reporters to transcribe an accurate record, so imagine what a foreign language without adequate interpretation could do to the accuracy of court proceedings and records, and hence the exercise of justice in our court system.

An International, Multilingual City

In Houston, an international city with approximately 700 foreign-owned firms,1 many different languages are spoken daily. About 100 countries have either business or government offices here. Numerous large non-U.S. based corporations have a presence in Houston.2 In turn, around 400 Houston-area companies have offices in at least 129 countries.3 Altogether, more than 3,300 area firms, foreign government offices, and non-profit organizations in Houston are involved in international business.4 The Port of Houston is ranked first nationally in international cargo tonnage, and second in total tonnage handled.5 Jeff Moseley, president and chief executive of the Greater Houston Partnership, notes that "[m]uch of our economic growth is due to our international trade ties."6 Houston's total international trade is $167.7 billion per year.7

However, Houston's international ties are not only economic. A surprisingly high percentage of Houstonians are foreign-born. In 2012, about one in seven Texans was born in another country,8 compared to more than one in five in the Houston Metropolitan Statistical Area9 and even higher in Houston proper. According to the U.S. census, 28.4 percent of Houstonians are foreign-born10 and 45.8 percent live in homes where a foreign language is spoken.11 In fact, more than 90 languages are spoken throughout the Houston area.12 Even one of Houston's law schools recently boasted of its international status13 and the multilingual composition of its 2015 graduating class (26 languages).14

Foreign Parties Meet American Justice?15

Houston's international character, and the resulting presence of many foreign parties with Limited English Proficiency (LEP), poses unique problems for the courts. The legal-interpreter literature is replete with examples of cases in which the lack of appropriate interpretation for persons with LEP led to unjust outcomes and the denial of access to justice.16 It has been argued that, when an LEP party lacks adequate interpretation during a trial, he/she is in effect constructively absent from the trial due to the fact finder's different understanding of what transpired or was said. This problem will be magnified with the increased incidence of multiple languages in the courtroom.

In response, our federal and state legal institutions have shown a heightened awareness of the needs of persons with LEP. Since 1978, federal legislation has provided for the appointment of interpreters for cases initiated by the government in federal courts.17 Similarly, the Civil Rights Act includes interpreter services as part of its protection against national origin discrimination.18 The Act's application to federally funded programs arguably applies to state court systems receiving federal financial assistance.19 In 2000, Executive Order 13166 mandated that federal agencies and recipients of federal financial assistance provide meaningful judicial access for LEP persons.20 In 2010, the Office of the U.S. Attorney General spelled out Language Access Obligations under Executive Order 13166 that require court personnel to possess working knowledge in identifying and coordinating the needs of persons with LEP.21 Nationally, organizations such as The National Center for State Courts and the American Bar Association have promulgated standards for courts to follow regarding LEP parties.22 Besides reinforcing the principle of equal access to courts
as a fundamental right, these organizations have recommended ways to: (1) identify those in need of interpretation; (2) train interpreters; (3) train court personnel, including judges; and (4) license or certify interpreters.

In Texas, the Court of Criminal Appeals has also long recognized criminal LEP defendants' right to an interpreter as a right protected by both the U.S. and Texas constitutions.23 Since at least 1965, Texas statutes have guaranteed criminal LEP defendants the right to appointed interpreters, requiring payment of interpreters from county funds.24 In 2001, the Texas Government Code mandated the licensing and certification of court interpreters, providing that "A court shall appoint a certified court interpreter or a licensed court interpreter if a motion for the appointment of an interpreter is filed by a party or requested by a witness in a civil or criminal proceeding in the court."25

But the law has not addressed many of the issues facing LEP parties. Though civil courts are more likely to encounter multiple languages in one proceeding, LEP parties in civil courts are not afforded the same constitutional guarantees as criminal defendants. Even the "shall" language cited above has been interpreted not to mandate the appointment of interpreters in civil cases, but simply to require that they be licensed once the court chooses to appoint them.26 Texas civil statutes27 and rules28 have addressed some civil LEP issues, such as non-mandatory appointment and payment of interpreters, but more challenges remain.

Current Trends

Courts nowadays are still heavily reliant on interpreters' in-person appearances in cases where a party or a party's witness has LEP. As such, much of the current state of the law addresses human interpreters, their compensation, and their qualifications. However, technology-assisted interpretation has steadily gained recognition. These methods mostly involve remote telephonic conferencing29 or video conferencing.30 Although useful, these methods are merely gap-fillers, and do not fully replace a face-to-face interpreter.31

Telephonic Interpreting

Telephonic interpreting is a quick, inexpensive way to have language interpretation when a live interpreter is not available.32 Telephonic interpreting operates on the premise that "a good interpreter at a distance is better than a bad one up close or none at all."33 It is easier for court staff to manage, and it saves time that interpreters can allocate to actual work rather than to travelling.34 The downsides of telephonic interpreting are problems with accuracy and reliability, as well as the lack of trained and equipped court staff. Therefore, it is only recommended for short proceedings.35 Studies also show that interpreters themselves generally dislike telephonic interpreting because the lack of visual cues may affect the accuracy of the interpretation.36

Video Remote Interpreting (VRI)

A mobile VRI device, which operates over a phone line or via the internet, is widely implemented in the medical field. When
needed, it will connect the caller to a qualified interpreter to provide prompt and accurate patient care. So far, courts have not adopted this kind of mobile device in court spoken-language interpreting. In sign-language interpreting, however, VRI is prevalently used when in-person interpreting is not immediately available.

Voice over Internet Protocol (VOIP)

VOIP is a relatively new technology that has gained wide acceptance and use.37 This technology operates on software over the internet to conduct the interpreting. The advantages are that it is cost effective, easy to install, accessible, and (most of all) mobile. The disadvantages are the risk of a possible dropped connection, slow speed, or poor quality, and sometimes VOIP cannot be controlled like other stationed equipment or systems.38

Courts' Use of Interpreting Technologies

As a result of the federal initiative brought by executive order and the Attorney General's mandate,39 the U.S. Justice Department developed a "Language Access Assessment and Planning Tool for Federally Conducted and Federally Assisted Programs." It requires federally-assisted programs to provide language assistance service, either oral or written,40 which could include in-language communications such as bilingual staff or interpreting (either in person, telephonically, via internet, or by video).41 In response, courts have implemented various procedures and technologies: (1) an "I Speak" card or Notice of Right video dubbed in foreign languages to identify interpreting needs for persons with LEP,42 (2) training for more qualified interpreters through partnership programs,43 (3) resource sharing among courts by building a network of PSIRCs (Public Service Interpreting Centers),44 or (4) use of speaker phones, digital audio platforms, specialized telephonic equipment, videoconferencing, and VOIP technology.45 A poignant example is Florida's Ninth Circuit Court, which has implemented
March/April 2013
Cisco T3 Systems to integrate remote interpreting for short criminal proceedings instead of face-to-face interpretation.46 Orange County, California also has similar technologies.47 Fairfax County, Virginia has implemented an audio-video teleconferencing system by design firm Miller, Beam & Paganelli that is a more advanced version of VRI in which courts have control over the entire process during video interpreting.48 This advanced system facilitates easier interaction between the court and remote interpreters, and between the court and remote witnesses.49
Future Trends

Mobile VRI
With VRI gaining acceptance, current technology makes it more easily accessible and appealing to the general public. Software developers have created software that can handle these types of demands. For example, free or low-cost software such as AnyMeeting, Cisco WebEx meeting, AdobeConnect, and GotoMeeting or GotoMeeting Shared Space can be used over phones, over the internet, or on computers to allow large numbers of participants. With a few adaptations in functions, it will not be difficult to design a low-cost digital interpreting channel to suit court needs.

Machine Interpretation
Machine translation or interpretation has been a widely-discussed topic and has achieved significant growth in the past few decades. Machine translation, computer-assisted translation, electronic dictionaries, and voice-response translators are the four major translation modules.50 Statistics show that machine translation or interpretation has fewer errors in certain languages, such as Italian, than others, such as Hindi.51 The idea of machine interpretation basically relies on three steps of computer processing: speech recognition, text to machine translation, and text to speech. Current speech-recognition technology can achieve approximately 90 percent accuracy,52 and empirical studies or trials show that text-to-speech technology is amazingly accurate in English. Several speech-recognition-based commercial applications have made their way into our daily lives. These include navigation software, Google Search™, phone personal assistants such as Siri by Apple, Galaxy by Samsung, Google Voice Action by Google, Edwin Speech to Speech, SpeakToIt Assistant, and Vlingo. These applications have proven efficient, useful, and almost error-free for simple tasks such as directions and locations. Google Translate and Bing Translator have proven helpful with machine interpretation, but user experience is that they still cannot handle translation of long or complicated sentences in either source or target language. In recent years, Microsoft achieved a breakthrough in speech-recognition technology based on deep neural networks that may change the landscape for machine interpretation.53 In layperson's terms, a deep neural network is an artificially engineered cognitive pattern that imitates the human brain. As such, it filters through ambiguous layers of interpretation of the received input using the context of the speech. Deep neural networks in computer science are based on the theory that "word sense disambiguation... can be accomplished by a neural network without prior linguistic analysis."54 The neural network is constructed by word associations in their machine-readable dictionary definitions.55 A deep neural network is "a feed-forward, artificial neural network that has more than one layer of hidden units between its inputs and its outputs."56 However, this method is not without its challenges, including the thousands of hours of data mining required and the need for development of effective speaker-adaptation techniques.57

Human Interpretation - Back in the Future?
As of 2012, machines have not yet replaced humans to perform automated
language interpretation. However, once the weak link of speech-to-speech technology is perfected to almost 100 percent accuracy, a machine interpretation module will be possible in a more complex and commercially viable context. As such technology develops, the legal system will have to keep up to ensure meaningful access to justice. But the question remains whether such technology could replace human interpreters. Based on our research, this proposition still seems doubtful. Part of the reason may be human resistance to the idea itself. As one author stated, "Speech is perhaps the most human of all form of human expression. And that is what makes human interpreters essential."58 Humans are apparently not yet willing to trust machines.59 This might be partly because language interpretation may require the use of interpersonal skills, critical thinking, and creativity that many believe machines will never accomplish. It has also been argued that a face-to-face interpreter whose livelihood depends on work quality would have more ethical accountability with the court than machines or their makers.60 Even if machines were not able to replace human interpreters, current literature in the field predicts that such technology will actually create more opportunities for "global" meetings and consequently more demand for interpreters.61 Furthermore, interpretation technology will certainly aid professional interpreters in doing a better job. Most importantly, when human interpreters are not readily available, technologically-enhanced interpretation would become essential to LEP parties' meaningful access to court. In short, as University of Maryland Professor Dr. Don Olcott, Jr. envisioned, it is likely that "[t]echnology will continue to be a powerful force in shaping our constitutional destinies, second only to our most valuable resource, human creativity."62

Conclusion
Houston's growth as an international, multilingual city will continue and this growth will become increasingly evident in our courts. We foresee that court interpreting technology worldwide will exponentially advance to assist interpreters, LEP parties, and the courts in bridging much of the language gaps that will inevitably take place in a multilingual courtroom. We finally anticipate and hope that, in the future, our legal system will continue to honor the best of our sound traditions of due process and equal protection for all, including equal access to court to parties with Limited English Proficiency.

The Hon. Josefina M. Rendón is an Associate Municipal Judge and former District Judge. Born a U.S. citizen in San Juan, Puerto Rico, she has lived in Texas since her adolescence. Judge Rendón has conducted simultaneous Spanish-English court interpretations. She also has some knowledge of French and German.

Lingling Dai is an attorney in Houston. Born and raised in China, Ling has lived in Texas for 11 years. She graduated from South Texas College of Law. She also pursued post-graduate studies in applied linguistics at Beijing Language and Culture University. She speaks Mandarin and has some knowledge of French.
Endnotes
1. International Business: Global Advantage, GREATER HOUSTON PARTNERSHIP, available at http://.
2. About Houston: Houston Facts and Figures, THE CITY OF HOUSTON'S OFFICIAL SITE FOR HOUSTON, TEXAS, available at abouthouston/houstonfacts.html.
3. Edward Russell, How Houston Became a Global City, NEXT CITY (Sept. 22, 2010), available at http://nextcity.org/daily/entry/how-houston-became-a-global-city.
4. Global Advantage in Foreign Trade, supra n.1.
5. About Houston: Houston Facts and Figures, supra n.2.
6. George Hugh, Immigrant Growth Powers Houston As a Global City, PLANETIZEN (Sept. 24, 2010), available at.
7. International Business: Gateway to Global Markets, GREATER HOUSTON PARTNERSHIP, available at gateway-to-global-markets.html.
8. Jeannie Kever, Census shows record growth in foreign-born population (May 10, 2012), available at http://www.chron.com/news/houston-texas/article/
Foreign-born-altering-face-of-U-S-Texas-3550385.php.
9. Economic Development: Facts & Figures—Demographics, GREATER HOUSTON PARTNERSHIP, available at.
10. State & County QuickFacts: Houston, Texas, U.S. CENSUS BUREAU (last revised Jan. 10, 2013), available at http://quickfacts.census.gov/qfd/states/48/4835000.html.
11. Id.
12. About Houston: Houston Facts and Figures, supra n.2.
13. Global Influence: From Houston to the World, BRIEFCASE MAGAZINE: UNIVERSITY OF HOUSTON LAW CENTER, vol. 31:1, at 6-8 (2012), available at http://.
14. Briefly Noted (Class of '15: From Arias to Zumba), BRIEFCASE MAGAZINE, supra n.13, at 19.
15. United States v. Carrion, 488 F.2d 12 (1st Cir. 1973).
16. E.g., Lupe S. Salinas, et al., The Right to Confrontation Compromised: Monolingual Jurists Subjectively Assessing the English-Language Abilities of Spanish-Dominant Accused, 18 AM. UNIV. J. OF GENDER & SOCIAL POLICY AND LAW 543, 543-561 (2010), available at viewcontent.cgi?article=1493&context=jgspl; Maxwell A. Miller, Hon. Lynn Davis, et al., Finding Justice In Translation: American Jurisprudence Affecting Due Process for People with Limited English Proficiency, 14 HARV. LAT. L. REV. 117 (2011); Elena M. de Jongh, Court Interpreting: Linguistic Presence v. Linguistic Absence, THE FLOR. BAR. J., July-Aug. 2008, available at JN/JNJournal01.nsf/Author/089C9FC08403FDF885257471005ECF98; Virginia Benmaman, Interpreter Issues on Appeal, 9 PROTEUS (Newsletter of the Nat'l Assoc. of Judiciary Interpreters and Translators), Fall 2000, available at FAQarticleBenmaman.htm; Alex Rainof, The Lessons of the Méndez Case: Suggested Transcription, Translation and Interpretation Assessment Methodology for the Courts, 21 PROTEUS, Spring 2012, available at http://.php; Diana Cochrane, Note, ¿Cómo se Dice "Necesito un Intérprete"? The Civil Litigants Right to a Court Appointed Interpreter in Texas, 12 SCHOLAR 47 (St Mary's L. Rev. on Minority Issues), Fall 2009, available at?
action=DocumentDisplay&crawlid=1&doctype=cite&docid=12+SCHOLAR+47&srctype=smi&srcid=3B15&key=3cae550f7d395b824ed6733ecd1f030a.
17. 28 U.S.C. § 1827 (2012).
18. Title VI, 1964 Civil Rights Act, 42 U.S.C. § 2000(d) (2012).
19. Id.
20. Exec. Order No. 13,166, 65 Fed. Reg. 50,121 (Aug. 16, 2000).
21. Office of the Attorney General, Memorandum, Federal Government's Renewed Commitment to Language Access Obligations Under Executive Order 13166 (Feb. 17, 2011), available at EO_13166_Memo_to_Agencies_with_Supplement.pdf.
22. William E. Hewitt, Standards for Language Access in Courts (Feb. 6, 2012), available at americanbar.org/content/dam/aba/administrative/legal_aid_indigent_defendants/ls_sclaid_standards_for_language_access_proposal.authcheckdam.pdf; William E. Hewitt, et al., COURT INTERPRETATION: MODEL GUIDES FOR POLICY AND PRACTICE IN THE STATE COURTS (National Center for State Courts 1995), available at singleitem/collection/accessfair/id/162/rec/18; Title VI, 1964 Civil Rights Act, supra n.18.
23. Garcia v. State, 210 S.W.2d 574, 579 (Tex. Crim. App.
1948); Ex Parte Marez, 464 S.W.2d 866, 866 (Tex. Crim. App. 1971) (citing Pointer v. Texas, 380 U.S. 400 (1965)); Diaz v. State, 491 S.W.2d 166, 168 (Tex. Crim. App. 1973); Flores v. State, 509 S.W.2d 580, 581 (Tex. Crim. App. 1974); Garcia v. State, 149 S.W.3d 135, 141 (Tex. Crim. App. 2004); Ex parte Nanes, 558 S.W.2d 893, 894 (Tex. Crim. App. 1977); Garcia v. State, 149 S.W.3d 135, 140 (Tex. Crim. App. 2004). 24. Acts 1965, 59th Leg., vol. 2, p. 317, ch. 722; TEX. CODE CRIM. PROC. ANN. arts. 38.30 (Vernon Supp. 2002) (spoken-language interpreters), 38.31 (interpreters for the deaf). 25. TEX. GOV’T CODE ANN. § 57.002(a) (West Supp. 2012). 26. Salem v. Asi, No. 02-10-00295-CV, 2011 WL 2119640, at *2 (Tex. App.—Fort Worth May 26, 2011, no pet.); Op. Tex. Att’y Gen. No. JC-0584, 2002 WL31674922, at *14 (2002). 27. TEX. CIV. PRAC. & REM. CODE ANN. ch. 21 (Vernon Supp. 2011) (interpreters for the deaf, interpreters for Spanish in border counties, interpreters for county courts, and interpreter fee). 28. TEX. R. CIV. P. 183 (addressing courts’ power to appoint and fix interpreter compensation). 29. FED. JUDICIAL CENTER & THE NAT’L INST. FOR TRIAL ADVOCACY, EFFECTIVE USE OF COURTROOM TECHNOLOGY: JUDGE’S GUIDE TO PRETRIAL AND TRIAL 16 (Feb. 2001), available at. 30. Carola E. Green & Wanda Romberger, Leveraging Technology to Meet the Need for Interpreters (National Center for State Courts 2009), available at http:// cdm16501.contentdm.oclc.org/cdm/ref/collection/ accessfair/id/184. 31. Tom Pointon, Telephone Interpreting Service is Available, British Medical Journal, vol. 312, at 53 (Jan. 6, 1996), available at PMC2349723/?page=1. 32. Roberto A. Gracia-Garcia, Telephone Interpreting: A Review of Pros and Cons, available at. org/articles/Telephone_interpreting-pros_and_cons. pdf. 33. Id. 34. Id. 35. Id. 36. Id. 37. Tex. Remote Interpreter Project (TRIP), available at; Green & Romberger, supra n.30. 38. Green & Romberger, supra n.30. 39. See supra nn.20-21. 40. 
Language Access Assessment and Planning Tool for Federally Conducted and Federally Assisted Programs, Fed. Coordination and Compliance Sec., U.S. Dept. of Justice, at 5 (May 2011). 41. Id. 42. Rodney Olson, An Analysis of Foreign Language Interpreter Services Provided for the District Court in Cass County, N. Dakota and Improvement Recommendations (May 2009); Pamela Sanchez, New Mexico Justice System Interpreter Resource Partnership, Final Grant Report (Dec. 2010) (“I Speak” cards are cards distributed to LEP persons for them to identify which non-English language they speak). 43. Sanchez, supra n.42. 44. William E. Hewitt, Interpreter Resource Center for Justice System and Other Public Agencies: A Concept Paper NCSC (2004). 45. Green & Romberger, supra n.30. 46. Remote Centralized Interpreting presentation, NINTH JUDICIAL CIRCUIT COURT OF FLORIDA, available at.
47. Tracey Clark, et al., Recommended Guidelines for Video Remote Interpreting (VRI) for ASL-Interpreted Events, JUDICIAL COUNCIL OF CALIFORNIA/ADMINISTRATIVE OFFICE OF THE COURTS: COURT INTERPRETERS PROGRAM (2012), available at.
48. CTMS: Courtroom Technology Management System presentation, FAIRFAX COUNTY COURTHOUSE (revised Jan. 24, 2013), available at fairfaxcounty.gov/courts/crto/pdf/ctmstrainingguide_audiovideotelecon.pdf.
49. Id.
50. White Paper On Court Interpretation: Fundamental to Access to Justice, CONFERENCE OF STATE COURT ADMINISTRATORS, at 13 (Nov. 2007), available at CourtInterpretation-FundamentalToAccessToJustice.pdf.
51. Franz Josef Och, Statistical Machine Translation presentation at Google Faculty Summit (July 30, 2009), available at PzPDRPwlA.
52. Patrick Marshall, Speech-recognition Software Now Faster, More Accurate than Ever, THE SEATTLE TIMES (Mar. 23, 2012), available at html/businesstechnology/2017827402_ptspeech24.html.
53. Rich Rashid, Speech Recognition Breakthrough for the Spoken, Translated Word presentation (Nov. 8, 2012), available at (last visited Feb. 1, 2013).
54. Paul Deane, Book Review: Computational Lexical Semantics, 21 Computational Linguistics 593, 596, available at.
55. Id.
56. Geoffrey Hinton, et al., Deep Neural Networks for Acoustic Modeling in Speech Recognition, IEEE SIGNAL PROCESSING MAGAZINE, Nov. 2012, available at; Frank Seide, et al., Conversational Speech Transcription Using Context-Dependent Deep Neural Networks, Interspeech 2011, available at microsoft.com/pubs/153169/CD-DNN-HMM-SWBInterspeech2011-Pub.pdf.
57. Seide, supra n.56, at 4.
58. Barry Olsen's Remarks to the National Judiciary Interpreters and Translators Association, transcribed at Interpreting and the Digital Revolution, INTERPRETAMERICA (remarks delivered May 13, 2011), available at com/2011/05/interpreting-and-digital-revolution.html.
59. Charles W. Bailey, Jr., Truly Intelligent Computers (Am. Library Assoc. 1992), available at.
org/pub/lita/think/bailey.html; Spencer Ackerman, The Pentagon Doesn’t Trust Its Own Robots, WIRED, Sept. 11, 2012, available at dangerroom/2012/09/robot-autonomy/; Sharon Weinberger, Next Generation Military Robots Have Minds Of Their Own, BBC, Sept. 28, 2012, available at; Katie Scott, Robot Taught to Think for Itself, WIRED UK, Aug. 2, 2011, available at. 60. COURT INTERPRETATION: MODEL GUIDE, supra n.22 at 184-185. 61. Olsen, supra n.58. 62. Holly Mikkelson, Plus ça change... The Impact of Globalization and Technology on Translator/Interpreter Education (originally presented as keynote address Oct. 29, 1999), available at pluschng.htm (quoting Dr. Don Olcott).
The Law Firm of the Future
By Toby Brown
Peering into a crystal ball on the future of the practice of law, one thing seems clear: Things need to change. And more specifically, lawyers need to change the way they practice law. Of all the talk of new technologies and new competitors in the legal market, there is a basic assumption that the status quo cannot hold. So then the question becomes: What does that change look like and how do we get there?

Taking a step back, it makes sense to review what is driving this change. The market is putting intense pressure on lawyers of all types. In-house counsel are under direct pressure to lower costs. Large firm lawyers are under pressure from in-house counsel clients to reduce rates and fees. Solo/small firm lawyers are now competing with well-funded document forms companies. Even government lawyers are being pressed to find ways to deliver legal services that are better-faster-cheaper. From experience in the large firm market, the pressure to adapt is intensifying. Alternative fee arrangements (AFAs) are one manifestation of this pressure. At its foundation, what we are witnessing is a shift from a sellers' market to one that is truly competitive. Whereas lawyers in the past could rely on being good lawyers to have a successful practice, now they must wear many hats and serve in many demanding roles. So this look into the future is no small challenge.

Years back I used to give law practice management seminars and categorized law practice functions into five topics: human resources (HR), facilities, financial management, marketing and technology. For this exploration, I will tackle each topic with an eye toward what each function might be like for a future firm.
Human Resources (HR)
Law firms are in the knowledge business and people are the means of production
in the form of knowledge workers. The future firm will do better to view all of their people as knowledge workers, expecting each person to add value to their client services regardless of whether they bill their time. Current practices where people are expected to be sitting at their desk to have value will need to be reexamined. Telecommuting and other work arrangements will motivate and enable employees to deliver the most in the least amount of time. Law firm personnel will also need continual professional development in order to constantly drive up their value to the firm's clients. On the lawyer side of personnel, every lawyer does not need to be on a partner track. Our future firm will happily employ "staff lawyers" who have great lawyer skills and are fulfilled performing as a lawyer – versus running a business. The "up or out" model for all lawyers will be set aside. In too many firms, HR has become a compliance function, enforcing leave policies and administering benefits. In the future, HR will need to embrace a talent role. In this role, HR's job will be constantly increasing each person's value in the firm. This, in turn, will increase the value of services provided to clients.

Facilities (a.k.a. Office Space and Equipment)
The future law firm will have to seriously reconsider the value of its office space. Many firms have already transitioned to dedicated conference space as the only option for client meetings. Working floors are properly restricted to firm personnel for both security and confidentiality reasons. This of course means the working space no longer needs to impress clients and can be configured much differently. Although high-rent locations may be retained, large offices with many windows will no longer need to be the norm. For good examples of what this might look like, check out the accounting industry or even the investment banking industry. Here offices occupy the center of a floor, with cubicles and work spaces aligned along the windows. This allows for privacy for executives and makes better use of exterior spaces, allowing light in and reducing the overall amount of space needed. As noted above in the HR section, future firms will embrace telecommuting, enabling significant cost savings on space. Firms that reach this level of enlightenment will have realized the true value that space adds to their services.

Financial Management
Traditional law firm finances focus on billing rates and realization for revenue. On the expense side, the four major costs are people, space, insurance and everything else. For revenue, the future firm needs to shift to an orientation more focused on profit margin. Here rates and realization may still matter, however, the better question becomes: What is the cost of goods sold versus the revenue received? A basic challenge for firms of all sizes is that partner compensation is, by definition, profit. Yet partners are also workers. So firms will need new profitability methods for separating partner compensation into a wages component and a profit component. This revised margin approach will drive a reexamination of the various costs incurred, making sure they each drive value and are efficiently deployed. The bottom line of financial management is the bottom line on the balance sheet. Of course, expense control will continue to be important to future firms. However, much like in the past, this effort should not consume too much valuable leadership time. Our future firm will put reasonable overhead cost controls in place using effective oversight and then perform check-ups annually. This sounds simple, yet too many lawyers focus too much time on expense control because they view every dollar spent as coming out of their pocket. The reality is that time spent on revenue growth has significantly more impact on the bottom line than time spent managing expenses.

Marketing
The days of using attorney bios as THE source of marketing are over. This outdated approach assumes clients hire lawyers for one reason: expertise. And it assumes a bio is sufficient for demonstrating the value proposition of expertise to a client market. In the future firm, marketing takes on its full business definition, including advertising, business development and sales. In this new, truly competitive market for services, law firms will need to do more marketing. The impulse to cut marketing to save on overhead will need to be avoided. Our future firm will retain people, internally or on contract, who have marketing expertise... and let them do their jobs. Too many times firms hire great marketing people, and then relegate them to event
planning and newsletters. As lawyers recommend that their clients listen to their legal counsel, so should lawyers trust the counsel of their marketing professionals. A new, emerging layer of marketing is utilizing social media and other web-based options. Embracing these tools will be critical to the future law firm given the relationship nature of legal services. Do not think of these tools as replacements for existing relationship activities, but instead as enhancements and extensions for further building client relationships. So far law firms have been very slow to adopt these new approaches, evidenced by the slow uptake of legal content blogs. Our future firm will be willing to take risks and experiment with various social media platforms.

Technology
For our future firm, we will split technology into two sub-categories: back office and front office. Back office technology is that which runs the law firm. It is practice management, financial management, document management and the various technologies that keep a practice functioning. The future firm will have all of these applications hosted on-line, in the cloud. Law firms have never been the best IT shops and this cloud approach will get them out of the IT business. Of course, security of client information will be an issue. However, I suggest that reputable cloud providers have far more secure technology than an average law firm. Front-office technology will also be better positioned in the cloud, but for a different reason. Interactive tools are the future. Being able to co-edit documents with a client, or interact on projects in real-time, will have tremendous value. Our future firm may even opt to build an app, allowing clients to interact with the law firm via smart phones and other mobile devices. Beyond moving technology to the cloud, future firms will need to integrate next-generation technologies into the
way they practice. One example is the document analysis engine provided by KIIAC. This technology is able to read volumes of documents and extract the clause structure for a document type and determine the most standard language for each clause. This tool effectively replaces the functions of a lawyer, and is able to do it on a much larger scale. Where a lawyer might review a few past document examples to create a clean draft, a tool like KIIAC can easily analyze hundreds of documents, giving a much more reliable work-product. KIIAC is just an example of the type of technology future firms should seek out and embrace.

Back to Business Basics
In covering our five topics, we have essentially begun the task of drawing up a business plan for our future firm. This foundation of doing things differently could easily be fleshed out into a complete plan for the firm of the future. In fact, some firms are already doing this (e.g., Clearspire, Valorem Law Group, etc.). A fundamental challenge for lawyers looking to embrace the future is what I call The Paradigm of Precedence. The basic idea is that lawyers have been trained in, and practice daily, an approach of "look to the past to determine the present." This way of thinking has crept into the way lawyers run their businesses and impacts many of their daily decisions. If you have ever heard the response "what are other firms doing?" to a proposal, then you have witnessed this in action. Lawyers need to break from this way of thinking and re-align their businesses with the needs of today and tomorrow's clients. The result will be more satisfied clients and a healthy practice.

Toby Brown is Director of Strategic Pricing & Analytics at Akin Gump Strauss Hauer & Feld, L.L.P. He also maintains the American Bar Association award-winning blog, 3 Geeks and a Law Blog, with two colleagues at www.geeklawblog.com.
Houston Bar Foundation Recognizes Outstanding Efforts by Volunteers
Ballard Takes Office as 2013 Chair
The Houston Bar Foundation marked its 30th year of service with an Annual Meeting and Luncheon held February 7 at the Four Seasons Downtown. The luncheon not only commemorated the installation of new officers, but also recognized the contributions of volunteers who provide pro bono legal representation and other services to the community. Glenn A. Ballard, Jr. of the Houston office of Bracewell & Giuliani LLP took office as 2013 Chair of the Houston Bar Foundation. Welcoming him and serving as keynote speaker was Rudolph W. Giuliani, name partner in the New York office of Bracewell & Giuliani, former Mayor of the City of New York and former Presidential candidate. Ballard succeeded Robert J. McAughan of Sutton McAughan Deaver PLLC as chair of the Foundation. Gregory Ulmer of Baker Hostetler LLP was elected vice chair of the Houston Bar Foundation and Craig Glidden of LyondellBasell Industries was elected treasurer. New directors are David Brinley of Shell Oil Company; William R. Buck of Exxon Mobil Corporation; and Denise Scofield of Morgan, Lewis & Bockius LLP. Completing terms as directors are Christopher J. Arntzen of CenterPoint Energy; Chris Popov of Vinson & Elkins LLP; and John Eddie Williams of Williams Kherker Hart Boundas, LLP. McAughan will serve on the board as immediate past chair. McAughan presented the Foundation's annual awards for pro bono service through the Houston Volunteer Lawyers (HVL), volunteer mediation services through the Dispute Resolution Center (DRC), and legal writing in The Houston Lawyer, the HBA's professional journal.

Rudy Giuliani welcomed his law partner, Glenn A. Ballard, Jr., as the 2013 Chair of the Houston Bar Foundation.
Presentation of James B. Sales Pro Bono Leadership Award
Robert J. McAughan presented the James B. Sales Pro Bono Leadership Award to Kay Sim, executive director of the HBA.
development of the Juvenile Justice Mock Trial Program, which has provided legal services to nearly 7,500 veterans and served as a model for statewide programs.
Photos by Temple Webber
Andrius Kontrimas accepted the award for Fulbright & Jaworski L.L.P. for Outstanding Contribution to HVL by a Large Firm for the 13th consecutive year. Awards were presented by 2012 Houston Bar Foundation Chair, Robert J. McAughan
Stephen Moll accepted the award for Gardere Wynne Sewell LLP for Outstanding Contribution to HVL by an Intermediate Firm.
Dan Hedges accepted the award for Porter Hedges LLP for Outstanding Contribution to HVL by a Mid-size Firm.
Michael Richardson accepted the award for Beck|Redden for Outstanding Contribution to the HVL by a Small Firm.
Lynne Kamin and Joan Jenkins accepted the award for Jenkins & Kamin, L.L.P. for Outstanding Contribution to the HVL by a Boutique Firm.
Benjamin Ederington accepted the award for LyondellBasell, one of two corporations honored for Outstanding Contribution to HVL.
Peter J. Bennett was honored for the third time for Outstanding Contribution to HVL by a Solo Practitioner.
Jack Balagia accepted the award for Exxon Mobil Corporation, one of two corporations honored for Outstanding Contribution to HVL, and Susan Sanchez of Exxon Mobil was one of two pro bono coordinators honored for Outstanding Contribution to HVL.
Monica Karuturi of LyondellBasell was one of two pro bono coordinators honored for Outstanding Contribution to HVL.
Francine Barton earned her second consecutive honor for Outstanding Contribution to the Dispute Resolution Center.
James Greenwood III was honored for Longevity of Exemplary Service to the Dispute Resolution Center.
Caroline C. Pace was honored as the author of the Outstanding Legal Article in The Houston Lawyer for 2012.
COMMITTEE SPOTLIGHT
The Lawyers Against Waste Committee Makes Houston a Greener and Cleaner City
By Polly Graham
The Houston Lawyer
During the recent drought, the Houston Arboretum & Nature Center—a preserve nestled within Memorial Park—lost roughly half of its tree canopy. The Lawyers Against Waste Committee stepped in to help, and coordinated a fundraising drive that led to the purchase of over a thousand trees to replenish the depleted forest area. On Arbor Day, over 200 volunteers arrived to begin settling the saplings into their new home. It was the largest planting by a single group in the Arboretum's history. Working in two-hour shifts, the volunteer lawyers slowly changed the landscape of dried brush and bare trees into a terrain budding with new life.

[Photo: HBA members and their families planted over 1,000 trees at the Houston Arboretum and Nature Center.]
[Photo: HBA President Brent Benoit and Lawyers Against Waste Committee Chair Laura Gibson.]
[Photo: Students from South Texas College of Law helped with registration.]

HBA President Brent Benoit spearheaded the initiative by promising to plant one thousand trees during his term. His commitment was deeply needed in Houston. Urban forests play a critical role in the health and economics of metropolitan life by increasing air and water quality, fostering wildlife diversity, and moderating temperatures. This results in the equivalent of hundreds of millions of dollars in pollution removal and carbon storage every year in Texas.1 But unfortunately, a recent Forest Service report shows that the tree canopy is steadily declining in urban areas across the United States,2 and Houston is at the top of the charts.3 Committee Chair Laura Gibson, a founding partner of Ogden, Gibson, Broocks, Longoria & Hall, L.L.P., put the plan into action. In a joint letter, Laura and Brent reminded bar members that the Houston Arboretum “[p]lays a vital role
in protecting native plants and animals in the heart of the city where development threatens their survival,” and “[p]rovides education about the natural environment to Houstonians of all ages.” As a result of their efforts, the committee easily exceeded its fundraising goals. The tangible legacy is a contribution to a healthy urban forest that will benefit Houston for many years to come.

For over a decade now, the committee has been working to transform the local community. Last fall, dozens of volunteer lawyers arrived to clean up the Dawson-Lunnon Cemetery, a small tract located just a few miles southeast of downtown. The historic plot once served as the burial ground for the Mt. Gilead Missionary Baptist Church, but over time had been all but forgotten. The volunteers cleared heavy brush and trash, uncovered headstones, and planted 15 trees in the process. And the cemetery now has a new trail adjacent to Brays Bayou leading to the grave stones of individuals who lived at the turn of the century.

Contact Claire Nelson (ClaireN@hba.org) or Rocio Rubio (RocioR@hba.org) or call the HBA office at 713-759-1133 for more information about the Lawyers Against Waste Committee.

Polly Graham is an associate in the appellate group at Haynes and Boone, L.L.P. and a member of The Houston Lawyer Editorial Board.

Endnotes
1. USDA Forest Service, Urban Forest Data, data/urban/state/?state=TX (last visited February 9, 2013).
2. David Nowak & Eric Greenfield, Tree and Impervious Cover Change in U.S. Cities, 11 URBAN FORESTRY & URBAN GREENING 21, 21 (2012).
3. Id. at 23.
A Profile in Professionalism
The Hon. Marc C. Carter, Judge, 228th Criminal District Court
“There but for the grace of God go I.”
As a criminal district court judge, and judge of the Harris County Veterans Court, I see the worst and the best of our community: I see families ravaged by crime; I see people who have lost everything as a result of their addiction to drugs and alcohol; I see chronically mentally ill men and women come through the courthouse doors day after day; I see men and women who have fought this country's wars come home broken, isolated, afraid and eventually charged with crimes. That of course is the worst, but I also see the best of our community: I see the victims of crime forgive their trespassers, and state that by forgiving, they are now free to embrace life and able to love those around them; I see drug addicts and alcoholics go into treatment programs and regain their ability to control their
addictions; I see the chronically mentally ill treated and become competent, happy, and productive citizens. In addition, I see our fighting men and women regain their sense of pride, honor, and duty. These struggles and victories are why I am a lawyer and serve as judge for Harris County. There are a host of tremendous individuals and agencies that come together to create these victories: There are the professionals at the Harris County District Attorney’s Office who seek justice and not convictions; there are defense lawyers who take the time to know their client, their family, and their story. Additionally, there are all the support agencies such as the Probation Department, the VA Medical Center, MHMRA, and countless other entities that work together to make our community safer and our lives better. I want to thank every one of you for your hard work, patience, and commitment to justice.
OFF THE RECORD
Jammin’ with The Writ Kickers
By Erika Anderson
The members of The Writ Kickers share a long love of music. Judge Bill Burke has been playing guitar for more than 50 years. He also plays the harmonica and the banjo. Steve Ferrell has been singing all his life and playing guitar for 41 years. Judge Brent Gamble has been playing guitar for 49 years. Judge Patricia Kerrigan has always loved to sing, but sang for the first time in public only a few years ago at a judge's CLE. She also sometimes provides percussion, and is working on adding an instrument to her repertoire. The members of The Writ Kickers have played off and on together for years, separately and with other musicians, at judicial holiday parties and for fun on weekends, only occasionally playing serious songs. The first time Judges Burke, Gamble and Kerrigan played together was at the 2009 judicial holiday party.

Then in January 2011 the as-yet-unnamed band had the opportunity to put on a serious public performance at the Houston Livestock Show and Rodeo. They practiced, and posted on Facebook looking for a name. A friend, Cheri Duncan, suggested “The Writ Kickers” and the name stuck. The band members successfully launched their musical side careers with a 30 minute intermission set at the Rodeo's Drilling & Grilling tent.

Since then The Writ Kickers have recruited Steve Ferrell, the band's newest member, and have continued to practice and refine their signature bluesy, folk-rock-style sound. They've played over a dozen concerts. They've played at outdoor festivals in Old Town Spring and as the opening act for a local theater company. They're even semi-regulars at Puffabelly's in Old Town Spring, as a group and as individuals.

Asked how they find the time for regular practice and performances between their busy schedules as lawyers and judges, Judge Kerrigan laughed and said simply that they make time because it's so much fun. That sense of fun is certainly evident to the audience when you watch them play. They're a talented group, and they clearly enjoy the spotlight.

You can find videos of The Writ Kickers' performances, and some impromptu jam sessions, by searching for The Writ Kickers on YouTube and on Facebook. The band also updates its Facebook page with information about future performances. You can catch them, together or separately, at Thursday open mic nights at Puffabelly's in Old Town Spring. The band is also available for events!

[Photo: The Writ Kickers: front row, Steve Ferrell and Judge Patricia Kerrigan; back row, Judge Bill Burke and Judge Brent Gamble.]

Erika Anderson is an associate with The Stinemetz Law Firm, PLLC and a member of The Houston Lawyer editorial board.
LEGAL TRENDS
New Civil Procedure Rules from the Texas Supreme Court
By Chance A. McMillan
On March 1, 2013, the Texas Rules of Civil Procedure received two new rule additions that could potentially affect a large portion of the State Bar's litigation practice. One rule, Texas Rule of Civil Procedure (“TRCP”) 91a, created a new procedure allowing a party to file a motion to dismiss on causes of action that have no basis in law or fact.1 The other rule, TRCP 169, will streamline cases involving claims of $100,000.00 or less to trial with limited discovery rules.2 The court also amended TRCP 47, 190, 190.2, and 190.5 to adhere to the expedited trial process.3 The following is a brief summary of the changes.

New Rule: Texas Rule of Civil Procedure 91a (Dismissal Rule)

In order to utilize TRCP 91a, a party must file a Rule 91a Motion to Dismiss within 60 days of receiving the pleading containing the inadequate cause of action.4 The court must then grant or deny the motion within 45 days after the motion is filed.5 The court may (but is not required to) conduct a hearing before ruling on the motion.6 The court must dismiss a cause of action if it has “no basis in law or fact.”7 A cause of action has no basis in law if
the allegations do not entitle the claimant to the relief requested.8 A cause of action has no basis in fact if “no reasonable person could believe the facts pleaded.”9 When considering the motion, the rule forbids the court from considering any evidence; the court must focus solely on the believability of the pleading when dismissing the cause of action.10 Once the court grants or denies the motion, it must then award the prevailing party attorney's fees.11 TRCP 91a is unique. It introduces a new believability standard that will allow the court to subjectively look at a party's pleadings and dismiss a cause of action before it ever reaches a jury.12 It also forbids the court from considering evidence that could potentially aid the court in the believability of the cause of action.13

New Rule: Texas Rule of Civil Procedure 169 (Expedited Actions)

The Supreme Court amended TRCP 47 to force parties to plead into one of five specific categories ranging from $100,000.00 or less to over $1,000,000.00.14 Failure to plead into one of the specific categories will preclude the offending party's discovery until the pleading is adequately amended.15 If a party seeks monetary relief of $100,000.00 or less, the case is governed by the new rule, TRCP 169.16 Under the framework of TRCP 169, and upon request of either party, the court must set a trial date for the case within 90 days after the discovery period ends.17 The discovery period begins when the case is filed and ends 180 days after the first request for discovery is served on a party.18 So, in theory, if a party files discovery with its petition, the court must set the case for trial, at the latest, 9 months after the pleading and discovery is filed with the court. Amendments to TRCP 190.2 also introduce limited written discovery rules to adhere to the expedited trial process.19
For cases seeking under $100,000.00, the parties now will be limited to 15 interrogatories, 15 requests for production, and 15 requests for admission.20 Also, each party is only allowed five hours total to complete jury selection, opening statements, direct examination and cross-examination of witnesses, and closing argument.21 TRCP 169 appears to be plaintiff-friendly. After all, it is the party seeking relief that decides how much to plead and whether to subject itself to the expedited action.22 There is no mechanism in TRCP 169 that allows a defendant to force a party into an expedited action if the party affirmatively pleads above $100,000.00 in its initial pleading. For more information on the new rules, check the Texas Supreme Court website or contact the court's rules attorney, Marisa Secco, at marissa.secco@txcourts.gov.

Chance A. McMillan is an associate with Thomas N. Thurlow & Associates located in Houston, Texas. His practice is dedicated to personal injury and civil litigation. He is a member of The Houston Lawyer Editorial Board.

Endnotes
1. Tex. R. Civ. P. 91(a).
2. Tex. R. Civ. P. 169.
3. Tex. R. Civ. P. 47; Tex. R. Civ. P. 190; Tex. R. Civ. P. 190.2; Tex. R. Civ. P. 190.5.
4. Tex. R. Civ. P. 91(a)(3)(a).
5. Tex. R. Civ. P. 91(a)(3)(c).
6. Tex. R. Civ. P. 91(a)(6).
7. Tex. R. Civ. P. 91(a)(1).
8. Id.
9. Id.
10. Tex. R. Civ. P. 91(a)(6).
11. Tex. R. Civ. P. 91(a)(7).
12. Tex. R. Civ. P. 91(a)(1).
13. Tex. R. Civ. P. 91(a)(6).
14. Tex. R. Civ. P. 47.
15. Tex. R. Civ. P. 47, cmt. 1.
16. Tex. R. Civ. P. 169(a)(1).
17. Tex. R. Civ. P. 169(d)(2).
18. Tex. R. Civ. P. 190.2(b)(1).
19. Tex. R. Civ. P. 190.2.
20. Tex. R. Civ. P. 190.2(b)(1-5).
21. Tex. R. Civ. P. 169(d)(3).
22. Tex. R. Civ. P. 169(a)(1).
LEGAL TRENDS
Texas Supreme Court Reinforces the Distinction Between Contract and Tort Claims
By Suzanne R. Chauvin
For the past 25 years, the Texas Supreme Court has struggled to clarify the boundary between contract and tort claims.1 In a recent case, the Court provided further guidance in the area of “contort,” that is, the “overlapping domain of contract law and tort law.”2 On June 15, 2012, in El Paso Marketing, L.P. v. Wolf Hollow I, L.P.,3 the Court held that a gas-fueled power plant could not maintain a negligence claim against a gas pipeline because the claim was contractual in nature. The power plant had argued that it could sue the pipeline under a tort theory because there was no contractual privity between the pipeline and the power plant, and the pipeline had allegedly delivered gas that caused property damage to the plant's equipment. The Texas Supreme Court concluded that the pipeline's duty to deliver gas of a certain quality arose out of contracts, although not directly between the power plant and the pipeline, and that the plant's claims sounded in contract, not in tort.

The case required the Court to analyze a series of complex, interrelated contracts for the supply and delivery of natural gas to the plant. Wolf Hollow owned and operated a gas-fueled electric generating plant, and hired El Paso Marketing, L.P. to manage the gas fuel supply under a Supply Contract. The Enterprise pipeline delivered raw natural gas to the plant under a Transportation Agreement that was originally executed by Wolf Hollow, but Wolf Hollow immediately assigned the Transportation Agreement to El Paso as required by the Supply Contract. The Supply Contract also required El Paso to deliver gas meeting certain quality specifications, and if it did not, El Paso was required to assign to Wolf Hollow any claim it might have against Enterprise for breach of the Transportation Agreement. Wolf Hollow complained of four brief service interruptions, and contended that it sometimes received gas contaminated with heavy liquid hydrocarbons, causing damage to sensitive plant equipment. Wolf Hollow sued El Paso for breach of contract and Enterprise for negligence. Wolf Hollow sought damages for plant repairs and equipment upgrades to prevent future harm to the plant, as well as damages for replacement power purchased during shutdowns for repairs and upgrades. Wolf Hollow chose not to accept a re-assignment of the Transportation Agreement in order to assert a breach of contract claim against Enterprise, because the Transportation Agreement contained damage waivers that would have precluded recovery of all the damages Wolf Hollow sought against Enterprise.

In concluding that Wolf Hollow could not assert a claim for negligence against Enterprise, the Court looked to the source of Enterprise's duty and the injury alleged to have occurred. The Court reaffirmed the distinction between claims sounding in contract and those sounding in tort: “[t]ort obligations are in general obligations that are imposed by law—apart from and independent of promises made and therefore apart from the manifested intention of the parties—to avoid injury to others.”4 The Court found that Wolf Hollow alleged violations of specific obligations that existed only in the Supply Contract and the Transportation Agreement. Therefore, Wolf Hollow was not asserting that Enterprise had failed to act as a reasonable pipeline should have—that is, the liability standard for negligence. Instead, Wolf Hollow was alleging that Enterprise violated obligations imposed by contract, not by law. The Court also noted that, if Wolf Hollow had sued Enterprise for breach of contract, Wolf Hollow would not be able to recover the damages it sought because they were consequential damages, and were waived under the Transportation Agreement. The Court declined, however, to allow Wolf Hollow to expand its remedies against Enterprise after assigning away the contract that created the obligations and remedies in the first place.

After determining that Wolf Hollow could not pursue a negligence claim against Enterprise, the Court addressed Wolf Hollow's claims against El Paso. The Court found that Wolf Hollow's claims for damage to its plant were all consequential, and were precluded by the parties' contracts. The
Court further found that Wolf Hollow's claims for replacement-power damages were not direct damages, because they derived entirely from the agreements Wolf Hollow had with its customers, and therefore, did “not flow necessarily from plant shutdowns due to gas supply problems.”5 Nevertheless, the Court found that the language of the Supply Agreement contemplated replacement power as a cover standard, precluding summary judgment against Wolf Hollow based on the consequential damages waiver.6

This decision is the latest in a line of cases addressing the overlap between contract and tort law, and the distinctions between claims and remedies for each. In the El Paso case, the Texas Supreme Court has reinforced the importance of the source of the defendant's duty in determining whether a claim sounds in contract or in tort.
Suzanne R. Chauvin is a partner in the Houston office of Strong Pipkin Bissell & Ledyard, L.L.P. Her litigation practice includes commercial, environmental, and products liability matters in state and federal courts. She is a member of the Defense Research Institute and is a Fellow of Litigation Counsel of America. She is a member of The Houston Lawyer Editorial Board.

Endnotes
1. See, e.g., Sw. Bell Tel. Co. v. DeLanney, 809 S.W.2d 493 (Tex. 1991); Jim Walter Homes v. Reed, 711 S.W.2d 617 (Tex. 1986).
2. BLACK'S LAW DICTIONARY 365 (9th ed. 2004). See also Erin Hopkins, Contort vs. Tort: Are We There Yet?, The Houston Lawyer, July/August 2011, at 16.
3. 383 S.W.3d 138.
4. 383 S.W.3d at 142-143 (quoting DeLanney, 809 S.W.2d at 494, and W. PAGE KEETON, DAN B. DOBBS, ROBERT E. KEETON & DAVID G. OWEN, PROSSER AND KEETON ON THE LAW OF TORTS § 92 at 655 (5th ed. 1984)).
5. 383 S.W.3d at 144.
6. El Paso filed a Motion for Rehearing, which was denied on December 14, 2012.
AT THE BAR
Judicial Investitures
The Hon. Ryan Patrick was sworn in as judge of the 177th District Court on March 1, 2013 by the Hon. Eric Andell, senior judge, First Court of Appeals. He was joined by his wife, Kellie Patrick.
The Hon. Kristin M. Guiney was sworn in as judge of the 179th District Court on February 21, 2013 by the Hon. Susan Brown of the 185th District Court. She was joined by her children, Cate and Abby McClees, and husband, Ed McClees.
Media Reviews
Government Control of News: A Constitutional Challenge
By Corydon B. Dunham
iUniverse, 2011
Reviewed by Suzanne R. Chauvin

Today, we can instantly access news from such wide-ranging sources as MSNBC, Fox, the Huffington Post, and the Drudge Report, all with a click of a mouse. But with so many sources of information, we often forget the days before the 24-hour news cycle, when major news stories came through only three networks, and we trusted Walter Cronkite, Chet Huntley and David Brinkley to present accurate, unbiased news. In those days, the Federal Communication Commission's Fairness Doctrine required radio and television news broadcasters to present contrasting views on important and controversial public issues.

In Government Control of News: A Constitutional Challenge, Corydon Dunham argues that the FCC's proposed “Localism, Balance and Diversity Doctrine” would violate the First Amendment guarantees of freedom of speech and freedom of the press, and establish a new era of federal regulation over the content of speech. Dunham, who served as executive legal counsel for NBC from 1965 to 1990, argued that the proposed new doctrine threatens to limit freedom of the press by placing broadcasters under fear of costly investigations, fines, and the potential loss of FCC licenses. In his opinion, the proposed doctrine will bring back the worst abuses of the Fairness Doctrine which, he believed, chilled free speech through intimidation by the FCC and other regulators. While the Localism, Balance and Diversity Doctrine seems unlikely to be adopted as proposed in 2008—as an FCC-sponsored study recommended that the doctrine proceeding be ended in 2011—Dunham's book, nevertheless, contains insights into clashes between the media, Congress, and different political administrations from a seemingly bygone news era.

Dunham's Government Control of News is at its best when recounting colorful stories of the early days of network news and government efforts to influence reporting. He describes the historic coverage of the Republican and Democratic conventions of 1948, when the conventions were carried live to only nine cities through coaxial cable. He recounts how during the Republican Convention, an engineer from the local NBC affiliate climbed to the roof of the convention center during a thunderstorm to hold antennas in place so that the network could continue to broadcast. He also recounts how an African-American delegate at the Democratic Convention caused an uproar when he demanded that the convention refuse to seat the Mississippi delegation, who had threatened to walk out if the convention adopted a strong civil rights platform.

From these dramatic beginnings, networks continued to broadcast controversial content, sometimes resulting in confrontations with presidents and congressional leaders. For example, when the networks covered stories about protests and violence at the Chicago Democratic Convention of 1968, Congress initiated an investigation to determine whether the coverage was biased. After reviewing films, thousands of documents, and hearing testimony from witnesses, Congress filed a complaint with the FCC charging that the coverage was biased and staged. Congress also began a lengthy investigation of CBS during the Vietnam War after the network aired a documentary about military spending on public relations, and after CBS refused to produce outtakes of film. In 1971, the Nixon Administration held meetings with each of the networks, threatening antitrust suits if the tenor of coverage did not change. The administration followed through on its threats when there was no change in the coverage.

Although Dunham's dystopian vision of FCC control over news content sounds, at first, far-fetched, his examples of past governmental abuses are thought-provoking. It remains to be seen whether the Localism, Balance, and Diversity Doctrine will be adopted; nevertheless, this book illustrates the tension between the government and the news media, and argues strongly against any regulation of content, regardless of the policy motivations behind such regulations.

Suzanne R. Chauvin is a partner in the Houston office of Strong Pipkin Bissell & Ledyard, L.L.P. Her litigation practice includes commercial, environmental, and products liability matters in state and federal courts. She is a member of The Houston Lawyer Editorial Board.
Media Reviews
The Law of Superheroes
By James Daily & Ryan Davidson
Gotham Books, 320 pages
Reviewed by Robert Painter

The Law of Superheroes brings back a lot of memories. It reminds me of a time when the law was new to me. It reminds me of a time when I had the luxury to think about the philosophy behind laws, rather than just their practical implications. And, of course, it reminds me of the fun world of superheroes. Each chapter of the book deals with broad areas of the law, including constitutional law, criminal law, evidence, criminal procedure, tort law and insurance, contracts, business law, administrative law, intellectual property, travel and immigration, and international law. The last two chapters, though, are particularly fun, dealing with the uniquely superhero legal issues confronting immortality, alter egos, and resurrection, and superhuman intelligence.

What makes The Law of Superheroes a decidedly more fun read than a typical law book is the stories. Jury psychologists frequently remind us that stories make it easier for people to understand, sympathize, and retain information—by turning facts into narrative. They also make
things more interesting. And some of the best-selling fiction in the world involves stories about superheroes. Each chapter discusses an area of law in the context of stories involving superheroes. The stories are other-worldly, but the legal analysis is real. Imagine, for instance, the issue of immigration, which mobilizes opinions all over the political and legal spectrum, being debated in the world of superheroes. In one chapter on travel and immigration, the authors take up a discussion about the legal implications of the abilities of Superman, the Flash, and Nightcrawler, to go anywhere they want—pretty much instantly—under their own powers. The reader finds himself wondering about the significance of Superman having to cross over national borders while Nightcrawler avoids this issue by simply disappearing on one side of the border and reappearing on the other.

One of the most interesting legal hurdles raised in the travel and immigration chapter is the United States Munitions List (USML). In the USML, the federal government lists weapons and related technology that are restricted from export. The USML makes it illegal to export a gas turbine specifically designed for use in a ground vehicle. Thus, Batman could not take his Batmobile abroad. For Iron Man, pretty much his whole suit would be export-restricted. That can really crimp a superhero's style!

The constitutional chapter raises equally interesting legal issues. Should Batman, for example, be considered a state actor? In 1982, the United States Supreme Court held in Lugar v. Edmondson Oil Co. that, under certain circumstances, private individuals may be considered state actors. In that case, the Supreme Court set out the following standard: first, does the action have its source from a right or privilege created by state authority, and second, can one fairly describe the private party as a state actor. Sometimes Batman springs into action when he sees the Bat Signal sent out by the Gotham City Police Department. One can make a good argument for why Batman in this context should be considered a state actor. After all, as an undercover billionaire, Batman is not on Gotham City's payroll, but he does respond to the City's calls for help. The book indulges both sides of the argument equally convincingly.

The Law of Superheroes allows the civil litigator to think beyond his highly specialized legal practice and go into the many other legal specialties he does not often encounter by exploring them through the highly entertaining world of superheroes.

Robert Painter is a trial lawyer with the Painter Law Firm PLLC, where he handles plaintiff medical malpractice cases. He is an associate editor of The Houston Lawyer.
LITIGATION MARKETPLACE
Office Space

Heights/I-10 – Beautifully remodeled 2-story building just minutes from Downtown Houston now offering executive legal offices with access to conference rooms, a full-time receptionist, Wi-Fi/phone/internet included, starting as low as $700/month with short term leases available. Please call 713-861-3595.

Galleria – Post Oak Blvd. – Class A Building. 1 or 2 Attorney offices and 1 Secretarial space with access to full amenities: reception lobby, receptionist, kitchen/lunchroom, library and conference room. Space shared with 6 attorneys in a prestigious firm. Furnished. Call Stephanie, 713-626-3700.

MIDTOWN – 3000 SMITH – Sublease with established law firm, partner office (20 X 12). Use of amenities, reception area, kitchen, conference room, free covered parking. Call 713-524-2400.

Great office space at 1601 Westheimer at Mandell, minutes from downtown Houston. Rent includes shared access to two conference rooms, kitchens, internet, cable, phones with VM, all utilities, part-time receptionist. Window offices ranging from $400-$1,000 per month with no long-term commitment. Please call Mark Kidd at 713-968-4601 for information.

Museum Dist/Montrose office space. Use of 3 conf rooms, kitchen. Receptionist to greet your clients. 22 offices in two remodeled homes. Rents start at $450 per month. Ask for Macon Strother 713-781-0778.

WEST LOOP – MEMORIAL PARK – Large window office available with view of arboretum. Minutes from downtown via Memorial Drive. Amenities include telephone, fax, copier, internet, coffee bar, law library, use of conference rooms and reception services. Secretarial services also available. Free parking. Additional space available for secretarial/paralegal use. Please call 713/622-6223.

Small one room office with phone, fax, copier, postage machine included in the rent. Centrally located. 1635 Dunlavy, Houston, Tx. 77006. (713) 528-9070.

For Sale

I AM RETIRING – Office furniture, books, equipment for sale. Terms available. 1,390 square feet in Galleria Financial Center; Landlord will extend lease. Large corner office, reception area, kitchen, file room, secretary space. Call 713-961-3241 for appointment to inspect.

Positions Available

Experienced litigation attorney sought by firm with a focus on multi-family, landlord representation. Competitive pay. Great environment. Stable and repeat client base. Send confidential resume to greenwaylawyer@hotmail.com.

Litigation firm with a focus on landlord representation is seeking experienced litigation paralegal/legal secretary. Competitive pay with great working environment. Send confidential resume to greenwaylawyer@hotmail.com.

AV Rated Firm Seeks Of Counsel Attorneys. AV-rated law firm in greater Houston area seeks experienced, accomplished attorneys interested in a virtual of counsel arrangement to grow or supplement their existing practices. Firm has strong web presence and of counsel attorneys are listed on firm's website. Successful candidates will have developed expertise in a particular practice area, received training at an AV-rated firm, and have a good academic record. Practice areas of interest include family law, personal injury, fiduciary litigation, corporate, real estate, intellectual property, employee benefits, and commercial litigation. Compensation based on business referred between of counsel attorney and firm members. Firm is also interested in self-sufficient candidates who want an office with the firm. Please reply to TxBizLaw@yahoo.com.

Business Firm Seeks Two Partners. Small AV-rated business firm in The Woodlands positioned for growth, seeks two self-sustaining attorneys to complement practice. Prior experience in a boutique, midsized or large firm with a record of advancement and strong academic record required. Must be team-oriented and willing to [...]. All inquiries held in strictest confidence. Please reply to TxBizLaw@yahoo.com.

Professional Services

Ticket and DWI defense, traffic warrant removal, DPS license hearings, occupational driver's licenses, and driver's license issues. Robert W. Eutsler. Tel. 713-464-6461.

Specializing in Financial Fraud, Asset Discovery, Due Diligence, Background, and White Collar Criminal Matters. Serving Corporate and Legal Communities Worldwide with the utmost discretion. Our offices are staffed by professional intelligence specialists with experience garnered from premier international government agencies. 2323 South Shepherd, Suite 920. 713.520.9191 • noukas@noukasintel.com.

For classifieds advertising, please contact: Mary Chavoustie, mary@quantumsur.com, 281.955.2449 ext. 13.
From page 8

to Harris County courts that his office will help implement. Finally, Judge Josefina Rendón and Lingling Dai address the changing demographics of Harris County and the growing need for adequate court interpretation. Our guest editors for this issue were Sammy Ford and Farrah Martinez. They tackled this challenging topic with gusto and managed to beat deadlines (no small feat for our Editorial Board!). Their leadership and hard work, along with the work of our entire Board, brought this interesting issue to life.

Endnotes
1. Emma Roller, L.A. Times Magazine Story From 1988 Predicts Life in 2013, predictions_about_life_in_2013_from_1988.html (March 15, 2013).
2. On an entirely unrelated note, new lawyers joining your firms this year likely were born in 1988 or thereabouts. While I am not what many would consider a seasoned lawyer, this nevertheless makes me feel old.
3. Nicole Yorkin, A Day In The Life, Los Angeles Times Magazine 7 (April 3, 1988), available at.
PLACEMENT POLICY
The Placement Service will assist HBA members by coordinating placement between attorneys and law firms. The service is available to HBA members and provides a convenient process for locating or filling positions.
4. To reply for a position available, send a letter to the Placement Coordinator at the Houston Bar Association, 1300 First City Tower, 1001 Fannin St., Houston, TX 77002 or e-mail Brooke Benefield at BrookeE@hba.org. Include the code number and a resume for each position. The resume will be forwarded to the firm or company. Your resume will not be sent to your previous or current employers.
1,000 full color business cards (both sides, front & back; design services available) for only $84.95. 281-955-2448 ext. 11.

Marketing communications — 13710 Treebank Lane, Houston, Texas 77070. Tel: 281.894.8608, cell: 713.202.2994, gsl007@sbcglobal.net.

Sheron R. "Sam" Sheppard, Co-owner / President, WyVac, Inc., 3101 Highway 59 N., Shepherd, TX 77371. Office: (936) 628-1210, Cell: (713) 805-5720, Fax: (936) 628-1207, sam.sheppard@wyvac-inc.com.

leo@quantumsur.com
5080 SEEKING ASSOCIATE LEGAL COUNSEL for Houston public pension fund. Approx. 4 years’ experience with retirement plans, employee benefits, administrative law, institutional investing or Texas local government law required. Background checks and drug testing. EOE.
1. To place an ad, attorneys and law firms must complete a registration record. Once registration is complete, your position wanted or available will be registered with the placement service for six months. If at the end of the six-month period you have not found or filled your position, it will be your responsibility to re-register with the service in writing.
3. In order to promote efficiency, PLEASE NOTIFY THE PLACEMENT COORDINATOR OF ANY POSITION FOUND OR FILLED.
5094 ESTATE PLANNING – PROBATE ATTORNEY. SUGAR LAND. Board certified
Positions Available
attorney, 33 year Houston area practice serving Harris/Fort Bend counties, seeking associate attorney with advanced estate planning and probate experience.

Positions Wanted

2066 — Rating Agencies, Trustees, Servicers and Special Servicers. Looking for in-house position.

2008 graduate of University of Texas Law, licensed in Texas with interest in civil litigation, and especially labor and employment. Summa cum laude B.A. in political science from Middlebury College. Worked for Texas Supreme Court during law school. Strongest assets are analytical, research, and writing skills. Looking for permanent position or temp-to-perm opportunity.

If you need information about the Lawyer Placement Service, please contact the HBA placement coordinator at the HBA office:
713-759-1133
thehoustonlawyer.com | March/April 2013
from the editor | https://issuu.com/leosur/docs/thl_marapr13 | CC-MAIN-2017-30 | refinedweb | 22,269 | 54.52 |
Opened 9 years ago
Closed 9 years ago
#2763 closed defect (fixed)
[patch] Inspect db crashes with MySQL date "0000-00-00" - Quick fix included
Description
When using inspectdb to create a model of a legacy database (some Postnuke stuff) the program encounters a timestamp value of "0000-00-00". This is valid in MySQL, but the code hangs on it, so I just added an if. It could be more elegant, but it works.
This is the patch:
--- util.py 2006-09-19 02:53:48.000000000 -0300
+++ util.py-patched 2006-09-19 02:42:39.000000000 -0300
@@ -43,6 +43,7 @@
###############################################
def typecast_date(s):
+ if s == "0000-00-00" : return None
return s and datetime.date(*map(int, s.split('-'))) or None # returns None if s is null
def typecast_time(s): # does NOT store time zone information
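For reference, the same guard generalizes to MySQL's zero timestamps as well. A standalone sketch (illustrative only — the function names mirror Django's util.py, but this is not the committed patch):

```python
import datetime

def typecast_date(s):
    # MySQL permits an all-zero date, which has no datetime.date equivalent
    if not s or s == "0000-00-00":
        return None
    return datetime.date(*map(int, s.split("-")))

def typecast_timestamp(s):
    # "0000-00-00 00:00:00" is likewise legal in MySQL
    if not s or s.startswith("0000-00-00"):
        return None
    d, t = s.split(" ")
    return datetime.datetime(*(list(map(int, d.split("-"))) + list(map(int, t.split(":")))))

print(typecast_date("2006-09-19"))                # 2006-09-19
print(typecast_date("0000-00-00"))                # None
print(typecast_timestamp("0000-00-00 00:00:00"))  # None
```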
Attachments (5)
Change History (17)
comment:1 Changed 9 years ago by wam-djangobug@…
Changed 9 years ago by wam-djangobug@…
Patch to address MySQL's NULL value format of time/date stamps.
Changed 9 years ago by wam-djangobug@…
same as prior attachment, but minus the debugging print statement.
comment:2 Changed 9 years ago by wam-djangobug@…
- Summary changed from Inspect db crashes with MySQL date "0000-00-00" - Quick fix included to [patch] Inspect db crashes with MySQL date "0000-00-00" - Quick fix included
Per comment on the attachment page, I'm updating the summary. Apparently, you can't make a change to the ticket properties without an associated comment or Trac thinks it's a spam posting.
Note that this bug/fix is a duplicate of ticket:2369
Changed 9 years ago by wam-djangobug@…
alternative patch which limits change to only affecting the mysql backend.
Changed 9 years ago by wam-djangobug@…
Update to prior patch to handle Zero times, as well as date and datetime stamps
Changed 9 years ago by wam-djangobug@…
same content of last patch, but with different name and diff options to get interactive viewing to work
comment:3 Changed 9 years ago by adrian
The latest patch (django-mysql-zero-typcast-ticket2763.diff) looks good for committing, but I'm holding off on committing because the Django/Oracle sprint is happening right now, and they're heavily refactoring that general area of the code. Let's check it in after they're done with the Oracle stuff. Thanks for the patch!
comment:4 Changed 9 years ago by adrian
- Component changed from django-admin.py to django-admin.py inspectdb
comment:5 Changed 9 years ago by mmccarty@…
Is this patch in the trunk?
comment:6 Changed 9 years ago by mmccarty@…
nm... I didn't see the comment on 11/04/06 by adrian.
comment:7 Changed 9 years ago by Simon G. <dev@…>
- Patch needs improvement set
- Triage Stage changed from Unreviewed to Ready for checkin
Marked as ready for checkin as per Adrian's comment, but patch possibly needs improvement if there were any major changes post-sprint (unlikely)
comment:8 Changed 9 years ago by Simon G. <dev@…>
- Needs tests set
comment:9 follow-up: ↓ 10 Changed 9 years ago by Jens
I think i have a similar problem. I got the error "year is out of range".
My table shows like this:
CREATE TABLE `pylucid_groups` (
  `id` int(11) NOT NULL auto_increment,
  `pluginID` int(11) NOT NULL default '0',
  `name` varchar(50) collate latin1_general_ci NOT NULL default '',
  `section` varchar(50) collate latin1_general_ci NOT NULL default '',
  `description` varchar(50) collate latin1_general_ci NOT NULL default '',
  `lastupdatetime` datetime NOT NULL default '0000-00-00 00:00:00',
  `lastupdateby` int(11) default NULL,
  `createtime` datetime NOT NULL default '0000-00-00 00:00:00',
  PRIMARY KEY (`id`),
  UNIQUE KEY `name` (`name`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci
I have values in this table with datetime '0000-00-00 00:00:00'.
inspectdb crashes at this point:
File ".\django\db\backends\mysql\introspection.py", line 15, in get_table_description
    cursor.execute("SELECT * FROM %s LIMIT 1" % quote_name(table_name))
Quick and dirty hack at this point
Index: management.py
===================================================================
--- management.py       (revision 4542)
+++ management.py       (working copy)
@@ -762,6 +762,9 @@
             relations = introspection_module.get_relations(cursor, table_name)
         except NotImplementedError:
             relations = {}
+        except Exception, e:
+            yield "Error: %s" % e
+            continue
         try:
             indexes = introspection_module.get_indexes(cursor, table_name)
         except NotImplementedError:
Yes, this fixed nothing, but i have the result from the other tables ;)
comment:10 in reply to: ↑ 9 Changed 9 years ago by anonymous
I think i have a similar problem. I got the error "year is out of range".
Sorry, I found this
comment:11 Changed 9 years ago by Andy Dustman <farcepest@…>
comment:12 Changed 9 years ago by mtredinnick
- Resolution set to fixed
- Status changed from new to closed
The patch above only handled dates, but date-timestamp columns were still causing an error. I've attached an updated diff (against subversion revision 3735) of django/trunk/django/db/backends/util.py which applies the same strategy (special case if clause) for both Date and DateTime fields. | https://code.djangoproject.com/ticket/2763 | CC-MAIN-2015-40 | refinedweb | 836 | 52.39 |
Technical Plenary and WG Meeting Event, Royal Sonesta Hotel - Cambridge, MA USA
At the 6-7 March 2003 Semantic Web Architecture Meeting there was a discussion of possible work on query languages for RDF. The background reading for it includes:
From this material, specifically Andy and Alberto's use cases and EricP's summary of the various languages, a pattern starts to emerge, in which a lot of very similar languages vary in certain features but remain fundamentally alike.
There was remarkable consistency in the abstract form of the query for a broad range of positive Horn(?) graph match query languages. Three levels appear:
but one notes that all levels can be covered by an unrestricted syntax, and services characterized by the restrictions they impose on the graph. An open question is the level or levels to be standardized.
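As a concrete illustration of the graph-match core these languages share, here is a toy evaluator in Python (the triples, variable convention, and function names are invented for illustration — no particular RDF query language is being implemented):

```python
# A graph is a set of (subject, predicate, object) triples.
GRAPH = {
    ("#alice", "#knows", "#bob"),
    ("#bob",   "#knows", "#carol"),
    ("#alice", "#name",  "Alice"),
}

def match(pattern, triple, bindings):
    """Unify one triple pattern (strings starting with '?' are variables)
    against a concrete triple, extending the bindings or returning None."""
    new = dict(bindings)
    for p, t in zip(pattern, triple):
        if p.startswith("?"):
            if p in new and new[p] != t:
                return None
            new[p] = t
        elif p != t:
            return None
    return new

def query(patterns, graph, bindings=None):
    """Return all variable bindings satisfying a conjunction of patterns."""
    bindings = bindings or {}
    if not patterns:
        return [bindings]
    results = []
    for triple in graph:
        b = match(patterns[0], triple, bindings)
        if b is not None:
            results.extend(query(patterns[1:], graph, b))
    return results

# "Whom does someone Alice knows, know?"
print(query([("#alice", "#knows", "?x"), ("?x", "#knows", "?y")], GRAPH))
```

The restrictions a service imposes on the patterns (tree-shaped, acyclic, arbitrary) correspond to the levels above; the matching core stays the same.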
An interesting comparison is with the RuleML project, which aims to integrate many non-RDF rule languages. It also uses a generic syntax with multiple sublanguages (see DTDs), and the categories of sublanguage should be compared. The languages being considered are not webized in that they do not in general use URIs to identify things; individuals and predicates are identified by PCDATA strings. Connections to RDF include the addition of a ur-form of constant (for some reason, rather than simply the use of URIs as identifiers), and the move toward the RDF reification of a RuleML query. This reification may be able to be converged with reification of an RDF query language. It seems evident that translation between an RDF query standard and RuleML will be straightforward. It isn't clear the extent to which the result will be re-exportable into the various rule engines. Conversion in the reverse direction would require the supply of namespace URIs, and the conversion of N-aries to combinations of binaries.
Things which are not covered by these levels are the ability to distinguish matches from different data sources within the query, and the ability to take action on a particular data source not containing a given piece of information. These are not covered here.
Clearly these are interconvertible and have different advantages and disadvantages, incompletely tabulated below. It may be best to require support for more than one or even all three.
The syntax choices (except for Versa's path-specific syntax) were mainly independent of the above semantic distinctions. Most query languages had a non-XML compact syntax, which is not surprising given that even the native XML Query language uses a non-XML syntax. The non-XML examples differed in various styles.
Among XML syntaxes are two basic approaches: one, to wrap regular RDF syntax with punctuation to make it a rules language, adding variables and the grouping statements of the query and return template; two, to reify a query in great and quite verbose detail. The latter method is very explicit, but for clarity in an RDF world would best take the form of RDF itself. It seems that attempts to use XML for RuleML in an RDF-compatible way led Harold Boley to conclude that XML should be changed.
The semantics of the queries chiefly differ along two axes:
Various different query systems had quite different powers of inference: several simply query a static database, several do a query with built-in use of certain axioms such as OWL axioms, some precompute an index of transitive closure, class membership, and so on. However, for all these differences in the deductive power of the store, the operation of query could always conceptually be considered to be a straightforward graph match query on some conceptual data store which was the deductive closure of the data under the kind of inference supported. [ref Pat Hayes presentation to DAML-PI meeting] .
This concept can be extended to include the support or otherwise of built-in functions: they do not, either, change the form of the query language.
Therefore, the RDF query language can be defined independently of the specification of the inference levels of the service.
It does make sense to make an ontology of the types of service offered, for example OWL-complete service, and to define relationships between datasets with or without various forms of inference. It seems that this is a lower priority, and less advanced. The need for standards here is not so acute.
In the case that a given query service supports optional powers of inference, then one would expect a description of the service requested to be sent with the query, but that it would use the same ontology.
Built-in functions such as arithmetic and string operations, and web access are a classic standardization problem and indeed many existing libraries exist and should be referenced. Existing systems which have libraries include the XQ set of functions (many of which are not XQ-specific), and the cyc and prolog libraries. This work is very connected with datatype definitions, and as datatypes (with the noted exception of rational numbers) are defined by XML Schema, it could be hoped that definitions and for the datatype operations should be provided by other work, with some effort required to reference them for use in the RDF query language.
Once a query language is defined, the mechanism or mechanisms for accessing a query service should be fairly straightforward to define on top of remote access protocols such as raw HTTP+GET and/or SOAP 1.2.
So, if work were begun in this area, formally or informally, more or less in chronological order, one might hope to see:
Future work which would extend the query language one could image being done in parallel but is not at such an advanced stage of development and need for standardization as the basic query language includes:
There is a need for RDF Query not only as a stand-alone language for RDF systems, but also for
Background reading above and all references
R. V. Guha et al, Enabling Inferencing, position paper for the W3C Query Languages meeting in Boston, December 3-4, 1998. Mentions many of the points made above. (And other papers from that workshop.)
XML Query
RuleML. Links within the text above may help the reader find various aspects of this project.
RFML
[] Minutes if any from DAML-PI rules breakout meeting.
[] Rational numbers have been defined by CC/PP and are used in UAProf profiles, although a complete definition of operators on them does not exist as far as I know (2003-03) | http://www.w3.org/2001/sw/meetings/tech-200303/query | CC-MAIN-2014-42 | refinedweb | 1,075 | 55.17 |
On 04/18/2013 11:35 AM, Laine Stump wrote:
>> +# Path to the setuid helper for creating tap devices. This executable
>> +# is used to create <source type='bridge'> interfaces when libvirtd is
>> +# running unprivileged. libvirt invokes the helper directly, instead
>> +# of using "-netdev bridge", for security reasons.
>> +#
>
> Are we sure we want to allow this to be configured? That could lead to
> some "interesting" troubleshooting incidents :-)

About the only time it would be configured is if qemu is installed in an alternate location.

> On the other hand, I guess the path to qemu itself is right there in the
> domain config file, so how much worse could this be...

Yeah, sometimes we've got to just trust the user to not be insane.

> ACK. (But I'd like at least one other ACK from someone else due to the
> fact that this is polluting the config namespace with something we would
> like to eventually eliminate.)

Even if we add a way for libvirt to get the tap device without depending on qemu's helper program, we'll have to leave the config item present (so we don't reject an older .conf file as invalid), but we can then ignore the entry at that point. I can live with this change going in, so I agree with your ACK, and have pushed it.

--
Eric Blake   eblake redhat com   +1-919-301-3266
Libvirt virtualization library
Attachment:
signature.asc
Description: OpenPGP digital signature | https://www.redhat.com/archives/libvir-list/2013-April/msg01436.html | CC-MAIN-2016-30 | refinedweb | 242 | 70.53 |
Steve Nelson wrote:
> Indeed - as I now have a function:
>
> def nsplit(s, n):
>     return [s[i:i+n] for i in range(0, len(s), n)]
>
> Incidentally I am currently going with:
>
> def nsplit(s, n):
>     while s:
>         yield s[:n]
>         s = s[n:]

You can write the generator function to use the same method as the list comp and avoid creating all the intermediate (partial) lists:

def nsplit(s, n):
    for i in range(0, len(s), n):
        yield s[i:i+n]

One of the cool things about generators is that they make it so easy to maintain state between yields. In Python 2.4, all you have to do is rewrite your original list comp to a generator comprehension:

def nsplit(s, n):
    return (s[i:i+n] for i in range(0, len(s), n))
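For example, the variants agree on a quick check (renamed here so they can coexist in one session):

```python
def nsplit_listcomp(s, n):
    # eager: builds the whole list of chunks at once
    return [s[i:i+n] for i in range(0, len(s), n)]

def nsplit_gen(s, n):
    # lazy: one slice per chunk, no intermediate partial lists
    for i in range(0, len(s), n):
        yield s[i:i+n]

s = 'abcdefghij'
print(nsplit_listcomp(s, 3))   # ['abc', 'def', 'ghi', 'j']
print(list(nsplit_gen(s, 3)))  # same chunks, produced lazily
```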
Sparse NDArrays with Gluon¶
When working on machine learning problems, you may encounter situations where the input data is sparse (i.e. the majority of values are zero). One example of this is in recommendation systems. You could have millions of user and product features, but only a few of these features are present for each sample. Without special treatment, the sheer magnitude of the feature space can lead to out-of-memory situations and cause significant slowdowns when training and making predictions.
MXNet supports a number of sparse storage types (often called
stype for short) for these situations. In this tutorial, we’ll start by generating some sparse data, write it to disk in the LibSVM format and then read back using the LibSVMIter for training. We use the Gluon API to train the model and leverage sparse storage types such as
CSRNDArray and RowSparseNDArray to maximise performance and memory efficiency.
import mxnet as mx
import numpy as np
import time
Generating Sparse Data¶
You will most likely have a sparse dataset in mind already if you’re reading this tutorial, but let’s create a dummy dataset to use in the examples that follow. Using
rand_ndarray we will generate 1000 samples, each with 1,000,000 features of which 99.999% of values will be zero (i.e. 10 non-zero features for each sample). We take this as our input data for training and calculate a label based on an arbitrary rule: whether the feature sum is higher than average.
num_samples = 1000
num_features = 1000000
data = mx.test_utils.rand_ndarray((num_samples, num_features), stype='csr', density=0.00001)
# generate label: 1 if row sum above average, 0 otherwise.
label = data.sum(axis=1) > data.sum(axis=1).mean()
print(type(data))
print(data[:10].asnumpy())
print('{:,.0f} elements'.format(np.product(data.shape)))
print('{:,.0f} non-zero elements'.format(data.data.size))
<class 'mxnet.ndarray.sparse.CSRNDArray'> [[0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] ... [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.]] 1,000,000,000 elements 10,000 non-zero elements
Our storage type is CSR (Compressed Sparse Row) which is the ideal type for sparse data along multiple axes. See this in-depth tutorial for more information. Just to confirm the generation process ran correctly, we can see that the vast majority of values are indeed zero. One of the first questions to ask would be how much memory is saved by storing this data in a
CSRNDArray versus a standard NDArray. Since sparse arrays are constructed from many components (e.g.
data,
indices and
indptr) we define a function called
get_nbytes to calculate the number of bytes taken in memory to store an array. We compare the same data stored in a standard
NDArray (with
data.tostype('default')) to the CSRNDArray.
def get_nbytes(array):
    fn = lambda a: a.size * np.dtype(a.dtype).itemsize
    if isinstance(array, mx.ndarray.sparse.CSRNDArray):
        return fn(array.data) + fn(array.indices) + fn(array.indptr)
    elif isinstance(array, mx.ndarray.sparse.RowSparseNDArray):
        return fn(array.data) + fn(array.indices)
    elif isinstance(array, mx.ndarray.NDArray):
        return fn(array)
    else:
        raise TypeError('{} not supported'.format(type(array)))
print('NDarray:', get_nbytes(data.tostype('default'))/1000000, 'MBs')
print('CSRNDArray', get_nbytes(data)/1000000, 'MBs')
NDarray: 4000.0 MBs CSRNDArray 0.128008 MBs
Given the extremely high sparsity of the data, we observe a huge memory saving here! 0.13 MBs versus 4 GBs: ~30,000 times smaller. You can experiment with the amount of sparsity and see how these two storage types compare. When the number of non-zero values increases, this difference will reduce. And when the density of non-zero values exceeds ~1/3 you will find that the sparse storage type takes more memory than dense! So use wisely.
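The break-even intuition can be checked with back-of-envelope arithmetic. The sketch below is a simplified cost model (my own, not MXNet's exact bookkeeping): 4-byte float32 values, and 8-byte entries for the indices and indptr arrays — which is what the 0.128008 MB figure above implies.

```python
def dense_nbytes(rows, cols, itemsize=4):
    # every element is stored, zero or not
    return rows * cols * itemsize

def csr_nbytes(rows, cols, density, itemsize=4, idxsize=8):
    nnz = int(rows * cols * density)
    # data + column indices + row-pointer array (one entry per row, plus one)
    return nnz * itemsize + nnz * idxsize + (rows + 1) * idxsize

# reproduces the numbers above: 4000.0 MBs dense vs 0.128008 MBs CSR
print(dense_nbytes(1000, 1000000) / 1000000, 'MBs')
print(csr_nbytes(1000, 1000000, 0.00001) / 1000000, 'MBs')

# under this model CSR costs 12 bytes per non-zero vs 4 bytes per dense
# element, so it stops paying off at roughly 1/3 density
print(csr_nbytes(1000, 1000000, 0.5) > dense_nbytes(1000, 1000000))  # True
```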
Writing Sparse Data¶
Since there is such a large size difference between dense and sparse storage formats here, we ideally want to store the data on disk in a sparse storage format too. MXNet supports a format called LibSVM and has a data iterator called LibSVMIter specifically for data formatted this way.
A LibSVM file has a row for each sample, and each row starts with the label: in this case
0.0 or
1.0 since we have a classification task. After this we have a variable number of
key:value pairs separated by spaces, where the key is column/feature index and the value is the value of that feature. When working with your own sparse data in a custom format you should try to convert your data into this format. We define a
save_as_libsvm function to save the
data
(CSRNDArray) and
label (
NDArray) to disk in LibSVM format.
def save_as_libsvm(filepath, data, label):
    with open(filepath, 'w') as openfile:
        for row_idx in range(data.shape[0]):
            data_sample = data[row_idx]
            label_sample = label[row_idx]
            col_idxs = data_sample.indices.asnumpy().tolist()
            values = data_sample.data.asnumpy().tolist()
            label_str = str(label_sample.asscalar())
            value_strs = ['{}:{}'.format(idx, value) for idx, value in zip(col_idxs, values)]
            value_str = " ".join(value_strs)
            sample_str = '{} {}\n'.format(label_str, value_str)
            openfile.write(sample_str)
filepath = 'dataset.libsvm'
save_as_libsvm(filepath, data, label)
We have now written the
data and
label to disk, and can inspect the first 10 lines of the file:
with open(filepath, 'r') as openfile:
    lines = [openfile.readline() for _ in range(10)]
for line in lines:
    print(line[:80] + '...' if len(line) > 80 else line)
0.0 35454:0.22486156225204468 80954:0.39130592346191406 81941:0.1988530308008194... 1.0 37029:0.5980494618415833 52916:0.15797750651836395 71623:0.32251599431037903... 1.0 89962:0.47770974040031433 216426:0.21326342225074768 271027:0.18589609861373... 1.0 7071:0.9432336688041687 81664:0.7788773775100708 117459:0.8166475296020508 4... 0.0 380966:0.16906292736530304 394363:0.7987179756164551 458442:0.56873309612274... 0.0 89361:0.9099966287612915 141813:0.5927085280418396 282489:0.293381005525589 ... 0.0 150427:0.4747847020626068 169376:0.2603490948677063 179377:0.237988427281379... 0.0 49774:0.2822582423686981 91245:0.5794865489006042 102970:0.7004560232162476 ... 1.0 97133:0.0024336236529052258 109855:0.9895315766334534 116765:0.2465638816356... 0.0 803440:0.4020800292491913
Some storage overhead is introduced by serializing the data as characters (with spaces and colons).
dataset.libsvm is 250 KBs but the original
data and
label were 132 KBs combined. Compared with the 4GB dense
NDArray though, this isn’t a huge issue.
Reading Sparse Data¶
Using LibSVMIter, we can quickly and easily load data into batches ready for training. Although Gluon Datasets can be written to return sparse arrays, Gluon DataLoaders currently convert each sample to dense before stacking up to create the batch. As a result, LibSVMIter is the recommended method of loading sparse data in batches.
Similar to using a DataLoader, you must specify the required
batch_size. Since we’re dealing with sparse data and the column shape isn’t explicitly stored in the LibSVM file, we additionally need to provide the shape of the data and label. Our LibSVMIter returns batches in a slightly different form to a
DataLoader. We get
DataBatch objects instead of
tuple.
data_iter = mx.io.LibSVMIter(data_libsvm=filepath, data_shape=(num_features,),
                             label_shape=(1,), batch_size=10)
for batch in data_iter:
    data = batch.data[0]
    print('data.stype: {}'.format(data.stype))
    label = batch.label[0]
    print('label.stype: {}'.format(label.stype))
    break
data.stype: csr label.stype: default
We can see that
data and
label are in the appropriate storage formats, given their sparse and dense values respectively. We can avoid out-of-memory issues that might have occurred if
data was in dense storage format. Another benefit of storing the data efficiently is the reduced data transfer time when using GPUs. Although the transfer time for a single batch is small, we transfer
data and
label to the GPU every iteration so this time can become significant. We will time the
transfer of the sparse
data to GPU (if available) and compare to the time for its dense counterpart.
ctx = mx.gpu() if mx.test_utils.list_gpus() else mx.cpu()
%%timeit
data_on_ctx = data.as_in_context(ctx)
data_on_ctx.wait_to_read()
192 microseconds +- 51.1 microseconds per loop (mean +- std. dev. of 7 runs, 1 loop each)
print('sparse batch: {} MBs'.format(get_nbytes(data)/1000000))
data = data.tostype('default')  # avoid timing this sparse to dense conversion
print('dense batch: {} MBs'.format(get_nbytes(data)/1000000))
sparse batch: 0.001348 MBs dense batch: 40.0 MBs
%%timeit
data_on_ctx = data.as_in_context(ctx)
data_on_ctx.wait_to_read()
4 ms +- 36.8 microseconds per loop (mean +- std. dev. of 7 runs, 100 loops each)
Although results will change depending on system specifications and degree of sparsity, the sparse array can be transferred from CPU to GPU significantly faster than the dense array. We see a ~20x speed up for sparse vs dense for this specific batch of data.
Gluon Models for Sparse Data¶
Our next step is to define a network. We have an input of 1,000,000 features and we want to make a binary prediction. We don’t have any spatial or temporal relationships between features, so we’ll use a 3 layer fully-connected network where the last layer has 1 output unit (with sigmoid activation). Since we’re working with sparse data, we’d ideally like to use network operators that can exploit this sparsity for improved performance and memory efficiency.
Gluon’s nn.Dense block can be used with CSRNDArray input arrays but it doesn’t exploit the sparsity. Under the hood, Dense uses the FullyConnected operator, which isn’t optimized for
CSRNDArray arrays. We’ll implement a
Block that does exploit this sparsity, but first, let’s just remind ourselves of the Dense implementation by creating an equivalent
Block called
FullyConnected.
class FullyConnected(mx.gluon.HybridBlock):
    def __init__(self, in_units, units):
        super(FullyConnected, self).__init__()
        with self.name_scope():
            self._units = units
            self.weight = self.params.get('weight', shape=(units, in_units),
                                          init=None, allow_deferred_init=True,
                                          dtype='float32', stype='default',
                                          grad_stype='default')
            self.bias = self.params.get('bias', shape=(units),
                                        init='zeros', allow_deferred_init=True,
                                        dtype='float32', stype='default',
                                        grad_stype='default')

    def hybrid_forward(self, F, x, weight, bias):
        return F.FullyConnected(x, weight, bias, num_hidden=self._units)
Our
weight and
bias parameters are dense (see
stype='default') and so are their gradients (see
grad_stype='default'). Our
weight parameter has shape
(units, in_units) because the FullyConnected operator performs the following calculation:

output = dot(x, weight.T) + bias
We could instead have created our parameter with shape
(in_units, units) and avoid the transpose of the weight matrix. We’ll see why this is so important later on. And instead of FullyConnected we could have used mx.sparse.dot to fully exploit the sparsity of the
CSRNDArray input arrays. We’ll now implement an alternative
Block called
FullyConnectedSparse using these ideas. We take
grad_stype of the
weight as an argument (called
weight_grad_stype), since we’re going to change this later on.
class FullyConnectedSparse(mx.gluon.HybridBlock):
    def __init__(self, in_units, units, weight_grad_stype='default'):
        super(FullyConnectedSparse, self).__init__()
        with self.name_scope():
            self._units = units
            self.weight = self.params.get('weight', shape=(in_units, units),
                                          init=None, allow_deferred_init=True,
                                          dtype='float32', stype='default',
                                          grad_stype=weight_grad_stype)
            self.bias = self.params.get('bias', shape=(units),
                                        init='zeros', allow_deferred_init=True,
                                        dtype='float32', stype='default',
                                        grad_stype='default')

    def hybrid_forward(self, F, x, weight, bias):
        return F.sparse.dot(x, weight) + bias
Once again, we’re using a dense
weight, so both
FullyConnected and
FullyConnectedSparse will return dense array outputs. When constructing a multi-layer network therefore, only the first layer needs to be optimized for sparse inputs. Our first layer is often responsible for reducing the feature dimension dramatically (e.g. 1,000,000 features down to 128 features). We’ll set the number of units in our 3 layers to be 128, 8 and 1.
We will use timeit to check the performance of these two variants, and analyse some MXNet Profiler traces that have been created from these benchmarks. Additionally, we will inspect the memory usage of the weights (and gradients) using the
print_memory_allocation function defined below:
def print_memory_allocation(net, block_idxs):
    blocks = [net[block_idx] for block_idx in block_idxs]
    weight_nbytes = [get_nbytes(b.weight.data()) for b in blocks]
    weight_nbytes_pct = [b/sum(weight_nbytes) for b in weight_nbytes]
    weight_grad_nbytes = [get_nbytes(b.weight.grad()) for b in blocks]
    weight_grad_nbytes_pct = [b/sum(weight_grad_nbytes) for b in weight_grad_nbytes]
    print("Memory Allocation for Weight:")
    for i in range(len(block_idxs)):
        print('{:7.3f} MBs ({:7.3f}%) for {:<40}'.format(weight_nbytes[i]/1000000,
                                                         weight_nbytes_pct[i]*100,
                                                         blocks[i].name))
    print("Memory Allocation for Weight Gradient:")
    for i in range(len(block_idxs)):
        print('{:7.3f} MBs ({:7.3f}%) for {:<40}'.format(weight_grad_nbytes[i]/1000000,
                                                         weight_grad_nbytes_pct[i]*100,
                                                         blocks[i].name))
Benchmark:
FullyConnected¶
We’ll create a network using
nn.Dense and benchmark the training.
net = mx.gluon.nn.Sequential() net.add( mx.gluon.nn.Dense(in_units=num_features, units=128), mx.gluon.nn.Activation('sigmoid'), mx.gluon.nn.Dense(in_units=128, units=8), mx.gluon.nn.Activation('sigmoid'), mx.gluon.nn.Dense32 ms +- 3.47 ms per loop (mean +- std. dev. of 7 runs, 1 loop each)
We can see the first FullyConnected operator takes a significant proportion of time to execute (~25% of the iteration) because there are 1,000,000 input features (to 128). After this, the other FullyConnected operators are much faster because they have input features of 128 (to 8) and 8 (to 1). On the backward pass, we see the same pattern (but
in reverse). And finally, the parameter update step takes a large amount of time on the weight matrix of the first
FullyConnected
Block. When checking the memory allocations below, we can see the weight matrix of the first
FullyConnected
Block is responsible for 99.999% of the memory compared to other FullyConnected weight matrices.
print_memory_allocation(net, block_idxs=[0, 2, 4])
Memory Allocation for Weight: 512.000 MBs ( 99.999%) for dense0 0.004 MBs ( 0.001%) for dense1 0.000 MBs ( 0.000%) for dense2 Memory Allocation for Weight Gradient: 512.000 MBs ( 99.999%) for dense0 0.004 MBs ( 0.001%) for dense1 0.000 MBs ( 0.000%) for dense2
Benchmark:
FullyConnectedSparse¶
We will now switch the first layer from
FullyConnected to
FullyConnectedSparse.
net = mx.gluon.nn.Sequential() net.add( FullyConnectedSparse(in_units=num_features, units=128),28 ms +- 22.7 ms per loop (mean +- std. dev. of 7 runs, 1 loop each)
We see the forward pass of
dot and
add (equivalent to FullyConnected operator) is much faster now: 1.54ms vs 0.26ms. And this explains the reduction in overall time for the epoch. We didn’t gain any benefit on the backward pass or parameter updates though.
Our first weight matrix and its gradients still take up the same amount of memory as before.
print_memory_allocation(net, block_idxs=[0, 2, 4])
Memory Allocation for Weight: 512.000 MBs ( 99.999%) for fullyconnectedsparse0 0.004 MBs ( 0.001%) for fullyconnected0 0.000 MBs ( 0.000%) for fullyconnected1 Memory Allocation for Weight Gradient: 512.000 MBs ( 99.999%) for fullyconnectedsparse0 0.004 MBs ( 0.001%) for fullyconnected0 0.000 MBs ( 0.000%) for fullyconnected1
Benchmark:
FullyConnectedSparse with
grad_stype=row_sparse¶
One useful outcome of sparsity in our CSRNDArray input is that our gradients will be row sparse. We can exploit this fact to give us potentially huge memory savings and speed improvements. Creating our
weight parameter with shape
(units, in_units) and not transposing in the forward pass are important pre-requisite for obtaining row sparse gradients. Using
nn.Dense would have led to column sparse gradients which are not supported in MXNet. We previously had
grad_stype of the
weight parameter in the first layer set to
'default' so we were handling the gradient as a dense array. Switching this to
'row_sparse' can give us these potential improvements.
net = mx.gluon.nn.Sequential() net.add( FullyConnectedSparse(in_units=num_features, units=128, weight_grad_stype='row_sparse'),()
334 ms +- 16.9 ms per loop (mean +- std. dev. of 7 runs, 1 loop each)
We can see a huge reduction in the time taken for the backward pass and parameter update step: 3.99ms vs 0.18ms. And this reduces the overall time of the epoch significantly. Our gradient consumes a much smaller amount of memory and means only a subset of parameters need updating as part of the
sgd_update step. Some optimizers don’t support sparse gradients however, so reference the specific optimizer’s documentation for more details.
print_memory_allocation(net, block_idxs=[0, 2, 4])
Memory Allocation for Weight: 512.000 MBs ( 99.999%) for fullyconnectedsparse1 0.004 MBs ( 0.001%) for fullyconnected2 0.000 MBs ( 0.000%) for fullyconnected3 Memory Allocation for Weight Gradient: 0.059 MBs ( 93.490%) for fullyconnectedsparse1 0.004 MBs ( 6.460%) for fullyconnected2 0.000 MBs ( 0.050%) for fullyconnected3
Advanced: Sparse
weight¶
You can optimize this example further by setting the weight’s
stype to
'row_sparse', but whether
'row_sparse' weights make sense or not will depends on your specific task. See contrib.SparseEmbedding for an example of this.
Conclusion¶
As part of this tutorial, we learned how to write sparse data to disk in LibSVM format and load it back in sparse batches with the LibSVMIter. We learned how to improve the performance of Gluon’s nn.Dense on sparse arrays using
mx.nd.sparse. And lastly, we set
grad_stype to
'row_sparse' to reduce the size of the gradient and speed up the parameter
update step.
Recommended Next Steps¶
More detail on the CSRNDArray sparse array format can be found in this tutorial.
More detail on the RowSparseNDArray sparse array format can be found in this tutorial.
Users of the Module API can see a symbolic only example in this tutorial. | https://mxnet.apache.org/versions/1.7/api/python/docs/tutorials/packages/ndarray/sparse/train_gluon.html | CC-MAIN-2022-33 | refinedweb | 2,916 | 51.55 |
Building resilient search experiences with Workbox
This codelab shows you how to implement a resilient search experience with Workbox. The demo app it uses contains a search box that calls a server endpoint, and redirects the user to a basic HTML page.
This codelab uses Chrome DevTools. Download Chrome if you don't already have it.
Measure
Before adding optimizations, it's always a good idea to first analyze the current state of the application.
- Click Remix to Edit to make the project editable.
- To preview the site, press View App. Then press Fullscreen
.
In the new tab that just opened, check how the website behaves when going offline:
Control+Shift+J(or
Command+Option+Jon Mac) to open DevTools.
- Click the Network tab.
- Open Chrome DevTools and select the Network panel.
- In the Throttling drop-down list, select Offline.
- In the demo app enter a search query, then click the Search button.
The standard browser error page is shown:
Provide a fallback response
The service worker contains the code to add the offline page to the precache list, so it can always be cached at the service worker
install event.
Usually you would need to instruct Workbox to add this file to the precache list at build time, by integrating the library with your build tool of choice (e.g. webpack or gulp).
For simplicity, we've already done it for you. The following code at
public/sw.js does that:
const FALLBACK_HTML_URL = ‘/index_offline.html’;
…
workbox.precaching.precacheAndRoute([FALLBACK_HTML_URL]);
To learn more about how to integrate Workbox with build tools, check out the webpack Workbox plugin and the Gulp Workbox plugin.
Next, add code to use the offline page as a fallback response:
- To view the source, press View Source.
- Add the following code to the bottom of
public/sw.js:
workbox.routing.setDefaultHandler(new workbox.strategies.NetworkOnly());
workbox.routing.setCatchHandler(({event}) => {
switch (event.request.destination) {
case 'document':
return caches.match(FALLBACK_HTML_URL);
break;
default:
return Response.error();
}
});
The Glitch UI says
workbox is not defined because it doesn't realize that the
importScripts() call on line 1 is importing the library.
The code does the following:
- Defines a default Network Only strategy that will apply to all requests.
- Declares a global error handler, by calling
workbox.routing.setCatchHandler()to manage failed requests. When requests are for documents, a fallback offline HTML page will be returned.
To test this functionality:
- Go back to the other tab that is running your app.
- Set the Throttling drop-down list back to Online.
- Press Chrome's Back button to navigate back to the search page.
- Make sure that the Disable cache checkbox in DevTools is disabled.
- Long-press Chrome's Reload button and select Empty cache and hard reload to ensure that your service worker is updated.
- Set the Throttling drop-down list back to Offline again.
- Enter a search query, and click the Search button again.
The fallback HTML page is shown:
Request notification permission
For simplicity, the offline page at
views/index_offline.html already contains the code to request notification permissions in a script block at the bottom:
function requestNotificationPermission(event) {
event.preventDefault();
Notification.requestPermission().then(function (result) {
showOfflineText(result);
});
}
The code does the following:
- When the user clicks subscribe to notifications the
requestNotificationPermission()function is called, which calls
Notification.requestPermission(), to show the default browser permission prompt. The promise resolves with the permission picked by the user, which can be either
granted,
denied, or
default.
- Passes the resolved permission to
showOfflineText()to show the appropriate text to the user.
Persist offline queries and retry when back online
Next, implement Workbox Background Sync to persist offline queries, so they can be retried when the browser detects that connectivity has returned.
- Open
public/sw.jsfor edit.
- Add the following code at the end of the file: code does the following:
workbox.backgroundSync.Plugincontains the logic to add failed requests to a queue so they can be retried later. These requests will be persisted in IndexedDB.
maxRetentionTimeindicates the amount of time a request may be retried. In this case we have chosen 60 minutes (after which it will be discarded).
onSyncis the most important part of this code. This callback will be called when connection is back so that queued requests are retrieved and then fetched from the network.
- The network response is added to the
offline-search-responsescache, appending the
¬ification=truequery param, so that this cache entry can be picked up when a user clicks on the notification.
To integrate background sync with your service, define a NetworkOnly strategy for requests to the search URL (
/search_action) and pass the previously defined
bgSyncPlugin. Add the following code to the bottom of
public/sw.js:
const matchSearchUrl = ({url}) => {
const notificationParam = url.searchParams.get('notification');
return url.pathname === '/search_action' && !(notificationParam === 'true');
};
workbox.routing.registerRoute(
matchSearchUrl,
new workbox.strategies.NetworkOnly({
plugins: [bgSyncPlugin],
}),
);
This tells Workbox to always go to the network, and, when requests fail, use the background sync logic.
Next, add the following code to the bottom of
public/sw.js to define a caching strategy for requests coming from notifications. Use a CacheFirst strategy, so they can be served from the cache.
const matchNotificationUrl = ({url}) => {
const notificationParam = url.searchParams.get('notification');
return (url.pathname === '/search_action' && (notificationParam === 'true'));
};
workbox.routing.registerRoute(matchNotificationUrl,
new workbox.strategies.CacheFirst({
cacheName: 'offline-search-responses',
})
);
Finally, add the code to show notifications:
function showNotification(notificationUrl) {
if (Notification.permission) {
self.registration.showNotification('Your search is ready!', {
body: 'Click to see you search result',
icon: '/img/workbox.jpg',
data: {
url: notificationUrl
}
});
}
}
self.addEventListener('notificationclick', function(event) {
event.notification.close();
event.waitUntil(
clients.openWindow(event.notification.data.url)
);
});
Test the feature
- Go back to the other tab that is running your app.
- Set the Throttling drop-down list back to Online.
- Press Chrome's Back button to navigate back to the search page.
- Long-press Chrome's Reload button and select Empty cache and hard reload to ensure that your service worker is updated.
- Set the Throttling drop-down list back to Offline again.
- Enter a search query, and click the Search button again.
- Click subscribe to notifications.
- When Chrome asks you if you want to grant the app permission to send notifications, click Allow.
- Enter another search query and click the Search button again.
- Set the Throttling drop-down list back to Online again.
Once the connection is back a notification will be shown:
Conclusion
Workbox provides many built-in features to make your PWAs more resilient and engaging. In this codelab you have explored how to implement the Background Sync API by way of the Workbox abstraction, to ensure that offline user queries are not lost, and can be retried once connection is back. The demo is a simple search app, but you can use a similar implementation for more complex scenarios and use cases, including chat apps, posting messages on a social network, etc. | https://web.dev/en/codelab-building-resilient-search-experiences/ | CC-MAIN-2020-29 | refinedweb | 1,145 | 50.43 |
DynamicMethod Constructor (String, Type, Type[])
Initializes an anonymously hosted dynamic method, specifying the method name, return type, and parameter types.
Assembly: mscorlib (in mscorlib.dll)
Parameters
- name
- Type: System.String
The name of the dynamic method. This can be a zero-length string, but it cannot be null.
- returnType
- Type: System.Type
A Type object that specifies the return type of the dynamic method, or null if the method has no return type.
- parameterTypes
- Type: System.Type[]
An array of Type objects specifying the types of the parameters of the dynamic method, or null if the method has no parameters.
This constructor specifies that just-in-time (JIT) visibility checks will be enforced for the Microsoft intermediate language (MSIL) of the dynamic method. That is, the code in the dynamic method has access to public methods of public classes. Exceptions are thrown if the method tries to access types or members that are private, protected, or internal (Friend in Visual Basic). To create a dynamic method that has restricted ability to skip JIT visibility checks, use the DynamicMethod(String, Type, Type[], Boolean) constructor.
When an anonymously hosted dynamic method is constructed, the call stack of the emitting assembly is included. When the method is invoked, the permissions of the emitting assembly are used rather than the permissions of the assembly that calls it.
.NET Framework
Available since 2.0
Portable Class Library
Supported in: portable .NET platforms
Silverlight
Available since 2.0
Windows Phone Silverlight
Available since 7.1 | https://msdn.microsoft.com/en-us/library/bb360425(v=vs.110) | CC-MAIN-2017-04 | refinedweb | 183 | 51.24 |
I have a really simple program to remove the first level of brackets in a string, but it doesn't seem to be working. There seems to be a problem with my logic, but I can't spot it.

Given a string like

AB(CCDC)((EF)G)H

the expected output is

ABCCDC(EF)GH

but the program actually produces

ABCCDC((EF))GH
def removeBrackets(string):
level = 0
list1 = list(string)
poses = []
for i in range(len(list1)):
print level, list1[i]
if level == 0 and list1[i] == '(':
print "skipping!"
level += 1
continue
elif level > 0 and list1[i] == '(':
poses.append(list1[i])
level += 1
elif level == 1 and list1[i] == ')':
print "skipping!"
level -= 1
continue
elif level > 0 and list1[i] == ')':
poses.append(list1[i])
level -= 1
print "adding " + list1[i] + "!"
poses.append(list1[i])
result = ""
for i in poses:
result += i
return result
elif level > 0 and list1[i] == '(':
    poses.append(list1[i])
    level += 1
...
elif level > 0 and list1[i] == ')':
    poses.append(list1[i])
    level -= 1
...
print "adding " + list1[i] + "!"
poses.append(list1[i])
The first two appends are redundant given the last one. They cause duplicate sets of parentheses to be added.
For what it's worth, you could simplify your function a bit.
list1 isn't needed. You can work with a string the same way you'd work with a list.
Instead of enumerating over the indices of
string, you could iterate over
string directly with
for ch in string. That gets rid of the numerous
list1[i] lookups. (If you really do want to know the index of each character, try
for i, ch in enumerate(string).)
Some redundancy is removed if you split up the character checks from the level ones. If you see a parenthesis you modify
level either way, so best to write
level += 1 and
level -= 1 just once.
poses is an odd name for a variable. How about
parts?
The last loop to create a string result can be replaced with
''.join(parts). This is a funny-looking, but common, Python idiom.
Result:
def removeBrackets(string):
    level = 0
    parts = []
    for ch in string:
        print level, ch
        if ch == '(':
            level += 1
            if level == 1:
                print "skipping!"
                continue
        elif ch == ')':
            level -= 1
            if level == 0:
                print "skipping!"
                continue
        print "adding " + ch + "!"
        parts.append(ch)
    return ''.join(parts)
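For reference, here is the same logic as a Python 3 function (an assumption on my part: the debug prints from the original Python 2 version are dropped, and the name is switched to snake_case):

```python
def remove_brackets(s):
    """Remove only the outermost level of parentheses from s."""
    level = 0
    parts = []
    for ch in s:
        if ch == '(':
            level += 1
            if level == 1:      # opening bracket of an outermost group: skip it
                continue
        elif ch == ')':
            level -= 1
            if level == 0:      # closing bracket of an outermost group: skip it
                continue
        parts.append(ch)
    return ''.join(parts)

print(remove_brackets('AB(CCDC)((EF)G)H'))  # ABCCDC(EF)GH
```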
Re: Locking objects in an array
From:
Daniel Pitts <newsgroup.spamfilter@virtualinfinity.net>
Newsgroups:
comp.lang.java.programmer
Date:
Tue, 05 May 2009 14:04:25 -0700
Message-ID:
<TR1Ml.11905$WT7.11315@newsfe11.iad>
Philipp wrote:
Hello,
I've come accross a threading problem for which I can't find a nice
solution. (SSCCP at end of post)
I have a bidimensional array of objects (view it as a 2D lattice). I
want to make atomic operations on random square 2x2 regions of the
lattice. Thus I want to lock the 4 objects of the region, then perform
the change, then unlock those objects. Several threads should be
allowed to work on the array at the same time, if each thread is
accessing a 2x2 region which does not overlap with that of another,
they should be capable of doing the work in a parallel way.
How should I design the code to achieve this?
My solution (which may be misguided) so far is that, each object of
the lattice contains a Lock and has a lock() and unlock() method which
delegate to the lock.
So the approach is basically:
1. call lock() on each of the 4 objects (always in the same order)
2. make change
3. call unlock() on each of the 4 objects
What I don't like about this, is that
a) lock and unlock really have nothing to do in the API of the objects
in the lattice. Would it be possible to achieve the same result
without using an explicit Lock (thus no lock() method), but using the
Java synchronized() mechanism?
b) if an exception is thrown anywhere, eg. where only a part of the
lattice region has been locked, the cleanup is ugly because you can't
test if a lock is presently locked. You have to rely on unlock() to
throw.
c) An exception may leave the lattice in an inconsistent state.
Here is the design I would consider:
Note, this is *completely* untested. It is also just a first iteration.
Replacing "boolean[][] locks" with a BitSet may be advantageous,
depending on the size of your data set.
If you really want to ensure exceptions don't leave the lattice in bad
shape, then you might make the objects themselves immutable, and have
Opeartion.operate return new objects, and have those be replaced into
the array upon success.
package latice;
import java.util.Arrays;
/**
* @author Daniel Pitts
*/
public class Latice<T> {
public static interface Operator<T> {
void operate(T[][] data);
}
private final T[][] data;
private final boolean[][] locks;
public Latice(int width, int height) {
data = allocate(width, height);
locks = new boolean[height][width];
}
public boolean operateOnRegion(int x, int y, int width, int height,
Operator<T> operator, long timeout) throws InterruptedException {
if (!lockRegion(x, y, width, height, timeout)) {
return false;
}
try {
operator.operate(getRegion(x, y, width, height));
return true;
} finally {
unlockRegion(x, y, width, height);
}
}
private void unlockRegion(int x, int y, int width, int height) {
synchronized (locks) {
setLockValue(x, y, width, height, false);
locks.notifyAll();
}
}
private void setLockValue(int x, int y, int width, int height,
boolean lockValue) {
for (int i = 0; i < height; ++i) {
Arrays.fill(locks[y + i], x, x+width, lockValue);
}
}
private T[][] getRegion(int x, int y, int width, int height) {
T[][] region = allocate(width, height);
for (int i = 0; i < height; ++i) {
System.arraycopy(data[i+y], x, region[i], 0, width);
}
return region;
}
private static <T> T[][] allocate(int width, int height) {
@SuppressWarnings({"unchecked", "UnnecessaryLocalVariable"})
final T[][] genericsBroken = (T[][]) new Object[height][width];
return genericsBroken;
}
private boolean lockRegion(int x, int y, int width, int height,
long timeout) throws InterruptedException {
final long endTime = System.currentTimeMillis() + timeout;
synchronized (locks) {
do {
final long timeNow = System.currentTimeMillis();
if (checkLocks(x, y, width, height)) {
break;
}
if (timeout == 0) {
locks.wait();
} else if (timeNow < endTime) {
locks.wait(endTime - timeNow);
} else {
return false;
}
} while (true);
setLockValue(x, y, width, height, true);
}
return true;
}
private boolean checkLocks(int x, int y, int width, int height) {
for (int j = 0; j < height; ++j) {
final boolean[] row = locks[y + j];
for (int i = 0; i < width; ++i) {
if (row[x+i]) {
return false;
}
}
}
return true;
}
}
--
Daniel Pitts' Tech Blog: <> | https://preciseinfo.org/Convert/Articles_Java/Set_Experts/Java-Set-Experts-090506000425.html | CC-MAIN-2022-27 | refinedweb | 709 | 61.87 |
Python Iterator Tutorial
Iterators are the omnipresent spirits of Python. They are everywhere and you must have come across them in some program or another. Iterators are objects that allow you to traverse through all the elements of a collection, regardless of its specific implementation.
That means that, if you have ever used loops to iterate or run through the values in a container, you have used an iterator.
In this post, you will learn more about Python iterators. More specifically, you will
- First you'll see Iterators in detail to really understand what they are about and when you should use them.
- Then, you will see what Iterables are, since there is an important difference between the two!
- Next, you'll learn about Containers and how they use the concept of iterators.
- You will then see the Itertools Module in action.
- Finally, you will see Generators and learn about generator expressions, which is basically generator comprehension.
Be sure to check out DataCamp's two-part Python Data Science ToolBox course. The second part will work you through iterators, loops and list comprehension. It is followed by a case study in which you will apply all of the techniques you learned in the course: part 1 and part 2 combined.
Now, let's dive into iterators.....
Iterators
An iterator is an object that implements the iterator protocol (don't panic!). The iterator protocol simply requires the object to define a __next__() method: every time you ask for the next value, the iterator knows how to compute it. It keeps track of the current state of the iterable it is working on, and it returns the next value when you call next() on it. Any object that has a __next__() method is therefore an iterator.
Iterators help to produce cleaner looking code because they allows us to work with infinite sequences without having to reallocate resources for every possible sequence, thus also saving resource space. Python has several built-in objects, which implement the iterator protocol and you must have seen some of these before: lists, tuples, strings, dictionaries and even files. There are also many iterators in Python, all of the
itertools functions return iterators. You will see what
itertools are later on in this tutorial.
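To make this concrete, here is a tiny made-up example of driving an iterator by hand with next(), including what happens once it runs out of values:

```python
colors = iter(['red', 'green'])  # iter() asks the list for its iterator

print(next(colors))   # red
print(next(colors))   # green

# Asking for one value too many raises StopIteration:
try:
    next(colors)
except StopIteration:
    print('exhausted')
```

A for loop does exactly this behind the scenes: it calls next() repeatedly and stops cleanly when StopIteration is raised.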
Iterables
An iterable is any object, not necessarily a data structure, that can return an iterator. Its main purpose is to return all of its elements. Iterables can represent finite as well as infinite sources of data. An iterable must define the __iter__() method, which returns an iterator object; that iterator, in turn, defines the __next__() method that actually produces the values.
Note: Often the iterable classes will implement both
__iter__() and
__next__() in the same class, and have
__iter__() return self, which makes the
_iterable_ class both an iterable and its own iterator. It's perfectly fine to return a different object as the iterator, though.
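Here is a sketch of the second case: an iterable whose __iter__() returns a separate, fresh iterator each time, so the same object can be traversed more than once (the class name and values are made up for illustration):

```python
class Evens:
    """An iterable of the even numbers below a limit."""
    def __init__(self, limit):
        self.limit = limit

    def __iter__(self):
        # Delegate to a brand-new range iterator on every call,
        # instead of returning self.
        return iter(range(0, self.limit, 2))

evens = Evens(10)
print(list(evens))  # [0, 2, 4, 6, 8]
print(list(evens))  # [0, 2, 4, 6, 8]  (works a second time)
```

An iterator that returns self from __iter__() would be exhausted after the first list() call; this version is not, because each traversal gets its own iterator.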
There is an important difference between an iterable and an iterator. Let's see this with an example:
a_set = {1, 2, 3}
b_iterator = iter(a_set)
next(b_iterator)

type(a_set)        # <class 'set'>
type(b_iterator)   # <class 'set_iterator'>
In the example,
a_set is an iterable (a set) whereas
b_iterator is an iterator. They are both different data types in Python.
Wondering how an iterator works internally to produce the next value when asked? Let's build an iterator that returns a series of numbers:

class Series(object):
    def __init__(self, low, high):
        self.current = low
        self.high = high

    def __iter__(self):
        return self

    def __next__(self):
        if self.current > self.high:
            raise StopIteration
        else:
            self.current += 1
            return self.current - 1

n_list = Series(1, 10)
print(list(n_list))

[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
__iter__ returns the iterator object itself and the
__next__ method returns the next value from the iterator. If there are no more items to return, it raises a
StopIteration exception.
It is perfectly fine if you cannot write the code for an iterator yourself at this moment, but it is important that you grasp the basic concept behind it. You will see generators later on in the tutorial, which is a much easier way of implementing iterators.
Containers
Containers are the objects that hold data values. They support membership tests, which means you can check if a value exists in the container. Containers are iterables - lists, sets, dictionary, tuple and strings are all containers. But there are other iterables as well like open files and open sockets. You can perform membership tests on the containers:
if 1 in [1, 2, 3]:
    print('List')
if 4 not in (1, 2, 3):
    print('Tuple')
if 'apple' in 'pineapple':
    print('String')  # a string contains all of its substrings
List
Tuple
String
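Membership tests also work on your own containers: if a class defines no __contains__ method, the in operator falls back to iterating with __iter__ and comparing each element. A minimal made-up example:

```python
class Basket:
    """A container whose membership test falls back to iteration."""
    def __init__(self, items):
        self.items = items

    def __iter__(self):
        return iter(self.items)

b = Basket(['apple', 'pear'])
print('apple' in b)   # True
print('plum' in b)    # False
```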
Itertools Module
Itertools is a built-in Python module that contains functions for creating iterators for efficient looping. In short, it provides a lot of interesting tools to work with iterators! Some of them keep providing values over an infinite range, so they should only be consumed by functions or loops that eventually stop asking for more values.
Let's check out some cool things that you can do with the
count function from the
itertools module:
from itertools import count

sequence = count(start=0, step=1)
while next(sequence) <= 10:
    print(next(sequence))
1
3
5
7
9
11
from itertools import cycle

dessert = cycle(['Icecream', 'Cake'])
count = 0
while count != 4:
    print('Q. What do we have for dessert? A: ' + next(dessert))
    count += 1
Q. What do we have for dessert? A: Icecream
Q. What do we have for dessert? A: Cake
Q. What do we have for dessert? A: Icecream
Q. What do we have for dessert? A: Cake
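The itertools module also offers cleaner ways to bound an infinite iterator than the hand-written while loop above, for example islice and takewhile:

```python
from itertools import count, islice, takewhile

# Take a fixed number of values from an infinite counter:
print(list(islice(count(0), 5)))                   # [0, 1, 2, 3, 4]

# Or take values only while a condition holds:
print(list(takewhile(lambda x: x < 5, count(0))))  # [0, 1, 2, 3, 4]
```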
You can learn about itertools in more depth here.
Generators
The generator is the elegant brother of the iterator: it allows you to write iterators like the one you saw earlier, but in a much simpler syntax where you do not have to write classes with
__iter__() and
__next__() methods.
Remember the example that you saw earlier with the iterator? Let's try to rewrite the code but using the concept of generators:
def series_generator(low, high):
    while low <= high:
        yield low
        low += 1

n_list = []
for num in series_generator(1, 10):
    n_list.append(num)
print(n_list)
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
The magic word with generators is
yield. There is no
return statement in the function
series_generator. The return value of the function will actually be a generator. Inside the while loop when the execution reaches the yield statement, the value of low is returned and the generator state is suspended. During the second next call, the generator resumes from the value at which it stopped earlier and increases this value by one. It continues with the while loop and comes to the
yield statement again.
yield basically replaces the return statement of a function, but instead of ending the function it provides a result to the caller without destroying local variables. Thus, on the next iteration, the function can work with those local variable values again. So unlike a normal function, which starts with a fresh set of variables on each call, a generator resumes execution where it left off.
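The suspended state is easy to observe by stepping a generator by hand (a small made-up example):

```python
def countdown(n):
    while n > 0:
        yield n      # execution pauses here, n is preserved
        n -= 1       # ...and resumes here on the next next() call

g = countdown(3)
print(next(g))  # 3
print(next(g))  # 2   (the local variable n survived between calls)
print(next(g))  # 1
```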
Tip: the lazy factory is the concept behind both generators and iterators. That means they are idle until you ask them for a value. Only when asked do they get to work and produce a single value, after which they turn idle again. This is a good approach for working with lots of data: if you do not need all the data at once, and hence there is no need to load it all into memory, you can use a generator or an iterator that passes you each piece of data at a time.
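One way to see the memory difference for yourself is sys.getsizeof. The exact byte counts vary by Python version and platform, so treat them as illustrative, but the list holds all of its elements while the generator is a small fixed-size object:

```python
import sys

as_list = [x * x for x in range(100000)]   # all 100,000 values in memory
as_gen = (x * x for x in range(100000))    # values produced one at a time

print(sys.getsizeof(as_list) > sys.getsizeof(as_gen))  # True
```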
Types of Generators
Generators can be of two types in Python: generator functions and generator expressions.
A generator function is any function in which the keyword
yield appears in the body. You have already seen an example of this with the
series_generator function. The mere appearance of the keyword yield is enough to make the function a generator function.
Generator expressions are the generator equivalent of a list comprehension. They can be especially useful in certain limited use cases. Just like a list comprehension returns a list, a generator expression returns a generator.
Let's see what this means:
squares = (x * x for x in range(1, 10))
print(type(squares))
print(list(squares))
<class 'generator'> [1, 4, 9, 16, 25, 36, 49, 64, 81]
Generators are incredibly powerful. They are more memory and CPU efficient and allow you to write code with fewer intermediate variables and data structures. They tend to require fewer lines of code and their usage makes the code easier to read and understand. That's why it's important that you try to use generators in your code as much as possible.
Where can you insert generators in your code? Tip: find places in your code where you do the following:
def some_function():
    result = []
    for ... in ...:
        result.append(x)
    return result
And replace it with:
def iterate_over():
    for ... in ...:
        yield x
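Filling in the sketch with a concrete (made-up) example, the list-building function becomes a generator function:

```python
def squares_upto(n):
    """Yield the squares 1*1 .. n*n one at a time instead of building a list."""
    for x in range(1, n + 1):
        yield x * x

print(list(squares_upto(4)))  # [1, 4, 9, 16]
```

Callers that only loop over the result need no change at all, but no intermediate list is ever built.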
Well done, Pythonista!
Iterators are a powerful and useful tool in Python. However, not everyone is very familiar with the nitty-gritty details of it. Congrats on making it to the end of this tutorial!
Head over to DataCamp's Intermediate Python for Data Science course. This tutorial works with
matplotlib's functions that you can use to visualize real data. You will also learn about new data structures such as the dictionary and the Pandas DataFrame. You have already seen how iterators are used under the hood... head over to learn more about logic, control flow, and loops in Python.
JVM Typed Functions
Previously on haxe.org
It's been about a week since I've announced the Haxe 4.1.0 release and made the bold claim that the JVM target went from being experimental to being the fastest Haxe target. Just as planned, this got the attention of various people, among which was Hugh. He flipped some switch in hxcpp and this happened:
I'll give the disclaimer that this is just a benchmark and not indicative of real performance improvements. Hugh himself told me that this particular benchmark was pathological for hxcpp due to garbage collection specifics. Nevertheless, any improvement helps and I'll happily abuse graphs like this for marketing purposes! So: "Allocations on hxcpp are 5x faster now! Haxe is amazing!"
Speaking of things being faster, after my last post we got some questions about what that HashLink Immix thing was about. It's a new garbage collection implementation which Aurel Bily, formerly known as Aurel Bily the Intern, is currently working on. He gives some information about it in this post. Naturally, I have a benchmark graph to tout:
In summary, the optimization game is on and I look forward to future developments in that regard. We will also make an effort to not only pay attention to our "pet targets", but also the more obscure ones, like the JavaScript target. Ah, I'm joking! JavaScript does very well in the benchmarks. Feel free to check it out, and please let us know if you have a suggestion for more benchmarks.
JVM Typed Functions
Now for what I actually want to talk about today. At least one person on reddit was interested in what we changed to improve the performance of anonymous functions and closures, so I'm happy to go into detail. First of all, let's establish some context of what we're actually talking about here:
function callMe(f:Int->Void) {
    f(12);
}

function main() {
    callMe(value -> trace(value));
}
(Note that I'm using module-level statics for these examples to reduce some noise. That's right, thanks to nadako's work we now got support for that on our development branch! If you're interested, grab a nightly build and check it out!)
The important part here is that with the
f(12) call, we don't know what exactly we are calling. It could be a static function, or a method closure on some instance, or an anonymous function, or perhaps something else. Even worse, we don't necessarily know the exact signature of what we're calling. The call-site thinks that it's a function which accepts a single argument of type
Int and returns nothing. However, Haxe's type system allows assigning functions with a different type to this:
// broader argument type
callMe((value:Float) -> trace(value));

// broadest argument type
callMe((value:Dynamic) -> trace(value));

// non-Void return type
callMe(function(value:Dynamic) return "foo");
Any call in the JVM run-time needs an exact type at bytecode-level. This is encoded as something called a descriptor. As an example, the
Int -> Void function from the example would have a
(I)V descriptor. We can verify this by looking at the decompiled bytecode of the
callMe function:
public static callMe(Lhaxe/jvm/Function;)V
 L0
  LINENUMBER 2 L0
  ALOAD 0
  BIPUSH 12
  INVOKEVIRTUAL haxe/jvm/Function.invoke (I)V
  RETURN
Don't worry if you aren't familiar with bytecode instructions. The main point here is that we have to call something that actually has the
(I)V descriptor. In our case, that something is the method
invoke on the class
haxe.jvm.Function.
haxe.jvm.Function
The base class for all typed functions is
haxe.jvm.Function. It utilizes information collected during compilation by generating a whole bunch of
invoke methods, which call other
invoke methods. This is done by classifying types as one of 9 possible classifications:
type signature_classification =
    | CBool
    | CByte
    | CChar
    | CShort
    | CInt
    | CLong
    | CFloat
    | CDouble
    | CObject
We can visualize the relation between these methods like a freeway with an on-ramp and an exit:
Ideally, we would always take the direct country road from the call-site to the method implementation. Unfortunately, that road can sometimes be a bit bumpy and our vehicle might not be equipped with appropriate tires (= types). This then requires us to take the Object Freeway, and find the exit ramp to our method. The Object Freeway consists of
invoke methods that take any number of
java.lang.Object arguments and also return
java.lang.Object.
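As a rough illustration (the class and method names below are invented for this sketch, not part of Haxe's generated code), the difference between the two routes is whether the JVM can call with exact primitive types or must move everything through java.lang.Object, boxing along the way:

```java
public class Routes {
    // "country road": exact descriptor (I)I, the int stays unboxed
    public static int addOneDirect(int x) {
        return x + 1;
    }

    // "Object Freeway": argument and result travel as java.lang.Object,
    // so the int is boxed to java.lang.Integer on the way in and out
    public static Object addOneBoxed(Object x) {
        return ((Integer) x) + 1;
    }

    public static void main(String[] args) {
        System.out.println(addOneDirect(41)); // prints 42, no allocation
        System.out.println(addOneBoxed(41));  // prints 42, via boxing
    }
}
```

Both routes produce the same result; the freeway route just pays the boxing toll.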
The on-ramp
The on-ramp is implemented directly in
haxe.jvm.Function like so:
- We start with the concrete argument types, classified as per the classification mentioned above.
- If any of these argument types is not
CObject, call the
invokemethod with the same number of arguments where all argument types have been replaced with
java.lang.Object.
- Next, if our return type is not
CObject, call the
invokemethod with the same argument types where the return type has been replaced with
java.lang.Object.
The last
invoke method is part of the freeway: It is a method with a given number of
java.lang.Object arguments and a return type of
java.lang.Object.
The Object Freeway itself
In order to support optional arguments, the implementation has to make room for the possibility that a given call-site might not provide enough arguments to satisfy a given method descriptor. This becomes particularly relevant when working with
Reflect.callMethod, where users commonly expect that optional arguments can be omitted.
Default values in Haxe are not part of the call-site, but are implemented in the functions themselves by checking something along the lines of
if (arg == null) arg = defaultValue. This has some advantages and disadvantages. A big advantage is that this can work even for call-sites that don't know what they are calling, such as
Reflect.callMethod. The only problem is that the call has to make sure that there are enough arguments. The compiler itself does that by appending
null values to the argument list, so dynamic call-sites have to do something similar.
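A sketch of what that looks like on the JVM side (the names here are invented; only the null-check shape comes from the description above): because the default lives in the callee, a call-site that lacks a value only has to pad the argument list with null.

```java
public class Defaults {
    // Rough shape of compiled output for a Haxe function with an
    // optional argument, e.g.  function greet(?name:String)
    public static String greet(String name) {
        if (name == null) name = "world"; // default applied inside the function
        return "hello " + name;
    }

    public static void main(String[] args) {
        // a dynamic call-site with too few arguments appends null
        System.out.println(greet(null));   // default kicks in
        System.out.println(greet("Haxe"));
    }
}
```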
This is achieved through the next step in our
invoke quest chain: If the arity (number of arguments) is less than the maximum assumed arity X, call the
invoke method that has an additional argument of type
java.lang.Object and use
null as the value. The number X is determined at compile-time by keeping track of the method that has the most arguments. These calls move us along the Object Freeway until we find an exit.
Here's how the Object Freeway parts of
haxe.jvm.Function look in the decompiled code for our small example:
public Object invoke() {
    return this.invoke((Object)null);
}

public Object invoke(Object arg0) {
    return this.invoke(arg0, (Object)null);
}

public Object invoke(Object arg0, Object arg1) {
    return null;
}
The exit
So far, all
invoke methods were part of the base class
haxe.jvm.Function. We can get on the freeway, but we cannot leave it yet. The exit itself has to be implemented on our concrete method objects. Fortunately, this is quite straightforward: All we have to do is implement the
invoke method with the correct number of
java.lang.Object arguments and a
java.lang.Object return type. That method will then, through normal override resolution, be picked up once it is reached on the freeway. Of course it then has to de-objectify the arguments in order to call the real implementation.
Here is an example using a local function:
function callMe(f:Int->Void) {
    f(12);
}

function main() {
    function f(value:Int) {
        trace(value);
    }
    callMe(f);
}
The local function
f is generated like this:
public static class Closure_main_0 extends Function {
    public static Main_Statics_.Closure_main_0 Main_Statics_$Closure_main_0 = new Main_Statics_.Closure_main_0();

    Closure_main_0() {
    }

    public void invoke(int value) {
        Log.trace.invoke(value, new Anon0("_Main.Main_Statics_", "source/Main.hx", 7, "main"));
    }

    public void invoke(java.lang.Object arg0) {
        this.invoke(Jvm.toInt(arg0));
    }

    public java.lang.Object invoke(java.lang.Object arg0) {
        this.invoke(arg0);
        return null;
    }
}
The bottommost
invoke method is the one on the freeway and starts our exit ramp. It invokes the
invoke method above it, which has a
void return type (note that this looks like a self-call in the decompiled code, but the actual bytecode has a different descriptor
(Ljava/lang/Object;)V). Finally, that one then invokes our concrete implementation, which has our original types.
As mentioned initially, the bytecode at the call-site is
INVOKEVIRTUAL haxe/jvm/Function.invoke (I)V. This matches the descriptor of our concrete implementation
invoke method, which means that the actual call can be made without any overhead (that's the country road). In contrast, if the argument types were different, we would detour over the Object Freeway and eventually reach the concrete implementation, too. This way, we have a fast track for sanely typed code and a slower track for insanely typed code.
Native closures and calls with unknown arity
Reflect.callMethod
Our examples so far always knew the number of arguments and could generate
invoke calls accordingly. When using the aforementioned
Reflect.callMethod function, we have to dispatch to the correct
invoke method depending on the number of arguments provided. This is generated exactly as dumb as it sounds and is also part of
haxe.jvm.Function:
public Object invokeDynamic(Object[] args) {
    switch(args.length) {
        case 0:
            return this.invoke();
        case 1:
            return this.invoke(args[0]);
        default:
            throw new IllegalArgumentException();
    }
}
The number of cases depends on the maximum determined arity mentioned before. This is crude, but effective.
Native closures
Finally, we have to deal with actual run-time closures in some way. This refers to cases where the compiler doesn't know that a closure of a field is taken, which can happen through the reflection API or liberal usage of the
Dynamic type. In cases like this we have to tie our elaborate
haxe.jvm.Function framework together with the reflection information we get from Java itself, which is mostly about
java.lang.reflect.Method.
This is handled by
haxe.jvm.Closure which carries an instance of
java.lang.reflect.Method and an optional context typed as
java.lang.Object. The just mentioned
invokeDynamic method is compatible with
Method.invoke, which is what we want to call. To that end,
Closure overrides
invokeDynamic in order to make that call. This requires some more work regarding function arguments, but that is uninteresting in the scope of this discussion.
That's already enough to support calls through
Reflect.callMethod as those invoke
invokeDynamic. However, as we demonstrated initially, our properly typed call-sites actually emit concrete
invoke calls. In order to connect those to
invokeDynamic,
Closure extends a generated class
ClosureDispatch which simply re-routes these calls:
public class ClosureDispatch extends Function {
    public ClosureDispatch() {
    }

    public Object invoke() {
        return this.invokeDynamic(new Object[0]);
    }

    public Object invoke(Object arg0) {
        return this.invokeDynamic(new Object[]{arg0});
    }

    public Object invoke(Object arg0, Object arg1) {
        return this.invokeDynamic(new Object[]{arg0, arg1});
    }
}
Closing remarks
This approach is certainly not going to win any Java awards for exceptional elegance, but I won't get tired of mentioning how efficient it is. It is very nice to reward properly typed code with good performance while still supporting all the dark arts of dynamic typing and reflection. It might be possible to improve this further, too:
- Usage of the Object Freeway requires boxing of basic types. In theory, there could also be a Double Freeway to avoid that. However, it is unclear if there are scenarios where this could actually be relevant and significant.
- I'm not sure how well native interoperability works with all this. We briefly discussed the idea of providing one interface per invoke-signature, which is what some other Java generators do. The problem here is that we need a common base class in the API anyway, so the advantage of juggling individual interfaces over just extending
haxe.jvm.Functionand implementing your favorite
invokemethod seems dubious.
- Another idea was to automatically implement some java functional interfaces. In fact, I got started with that out of curiosity and then completely forgot about it. As of Haxe 4.1.0, only the interfaces for
Consumerand
BiConsumerare inferred automatically. It remains to be seen if this has any value.
Please let us know if you have any questions or comments! | https://haxe.org/blog/jvm-typed-functions/ | CC-MAIN-2022-33 | refinedweb | 2,072 | 56.35 |
11 December 2009 10:43 [Source: ICIS news]
DUBAI (ICIS news)--SABIC affiliate Yanbu National Petrochemical Co (Yansab) is expected to begin commercial production at its new high density polyethylene (HDPE) plant at Yanbu, Saudi Arabia, in January 2010, SABIC CEO Mohamed al-Mady said late on Thursday.
“The HDPE plant, which will produce bimodal pipe grade, is in trial production currently but is expected to achieve commercial output in January,” Al-Mady said on the sidelines of the 4th Gulf Petrochemicals and Chemicals Association (GPCA) forum in Dubai.
Obtaining the relevant certification for selling the bimodal pipe grade has been completed, Al-Mady said.
ICIS news had earlier reported that Yansab was expected to achieve commercial output at the new plant by the end of this year.
Al-Mady did not disclose a reason for the delay to the start-up.
The 1.3m tonne/year cracker and other downstream plants at the Yansab complex started up at the end of August.
The three-day GPCA forum ended on 10 December.
{-# LANGUAGE Rank2Types, BangPatterns #-}
module Data.SouSiT.Sink (
    Sink(..),
    SinkStatus(..),
    closeSink,
    -- * monadic functions
    input, inputOr, inputMap, inputMaybe, skip,
    -- * utility functions
    appendSink, (=||=), feedList, liftSink,
    -- * sink construction
    contSink,
  ) where

newtype Sink i m r = Sink { sinkStatus :: m (SinkStatus i m r) }

data SinkStatus i m r = Cont (i -> m (Sink i m r)) (m r)
                      | Done (m r)

instance Monad m => Functor (Sink i m) where
    fmap f (Sink st) = Sink (liftM mp st)
        where mp (Done r)     = Done (liftM f r)
              mp (Cont nf cf) = Cont (liftM (fmap f) . nf) (liftM f cf)

instance Monad m => Monad (Sink i m) where
    return a = doneSink $ return a
    (Sink st) >>= f = Sink (st >>= mp)
        where mp (Done r)     = liftM f r >>= sinkStatus
              mp (Cont nf cf) = return $ Cont (liftM (>>=

-- | Reads the next element. Returns (Just a) for the element or Nothing if the sink is closed
--   before the input was available.
inputMaybe :: Monad m => Sink a m (Maybe a)
inputMaybe = inputMap (return . Just) (return Nothing)

-- | Reads the next element. Returns (Just a) for the element or Nothing if the sink is closed
--   before the input was available.
inputMap :: Monad m => (a -> m b) -> m b -> Sink a m b
inputMap f = contSink' (doneSink . f)

-- |
feedList :: Monad m => [i] -> Sink i m r -> m (Sink i m r)
feedList []     !s = return s
feedList (x:xs) !s = sinkStatus s >>= step
    where step (Done r)    = return s
          step (Cont nf _) = nf x >>= feedList xs

contSink :: Monad m => (i -> m (Sink i m r)) -> m r -> Sink i m r
contSink next = Sink . return . Cont next

contSink' :: Monad m => (i -> Sink i m r) -> m r -> Sink i m r
contSink' next = contSink (return .

step (return ()) where step i = process i >> return
= open >>= flip step i
step rs i = process rs i >> return (

contSink process = contSink step (return Nothing)
    where step i = process i >>= cont
          cont Nothing = return $ maybeSink process
          cont result  = return $ doneSink'

nf' (t cf)
    where nf' i = liftM (liftSink t) (t $ nf i)
I have heard about the:
char myline[100];
cin.getline(myline,100);
but when my coding is long it doesnt work i gues as long as i would like it to?
here is my code for example:
#include <iostream>
using namespace std;

int main()
{
    int length;
    int width;
    cout << "Enter the length: ";
    cin >> length;
    cout << "Enter the width: ";
    cin >> width;
    cout << "The area is ";   // <---- This is the problem, when it gets here it just closes...
    cout << length*width;
    char myline[100];         // <----- Does it have to do with this?
    cin.getline(myline,100);
    return 0;
}
Yes i am extremely new to programming and i am trying to learn it, but there is a problem.
When i enter the two variables from my keyboard, and it gets to the part where it should say "The area is " and show my answer, the program just shuts off... Does this have to do with the char myline? Someone please help. | https://www.daniweb.com/programming/software-development/threads/128217/making-a-console-app-stay-up | CC-MAIN-2017-34 | refinedweb | 156 | 86.64 |
function plusone(a) { return (a+1); #doc returns: the argument plus one. }It's a simple trick; as soon as the lexer finds a '#doc' directive, it will scan the rest of the text until it finds the closing curly brace, at which point it will go on about its lexing business, storing the documentation in the meanwhile. Is there a problem with that ? Just one, namely if we want the closing curly brace to be in the text. For one, opening and closing curly braces within documentation are counted, so there's never a problem with this, for example:
{ somecode(); #doc why have a { } block here ? }Here the opening and closing curly braces in the sentence are matched and ignored. Only the next instance of a closing curly brace is seen as closing the documentation section. But can we denote a single, unmatched closing curly brace in documentation ? No. Not really. But we can hide it from showing up in the generated documentation, using a 'hidden'-tag, which will be shown later on.
Usage: ufydoc [options] -d <dir> -o <dir> -f <html|xml|pdf|man>
Example:
author: Mr Smith
date: 2006-07-05
Callables have special identifiers to indicate callable-specific metadata. They are 'returns' and 'param<n>' and serve to illustrate, respectively, the return value of this particular callable, and whatever pops into your mind about its parameters, where <n> represents a number, from one upwards.
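To illustrate (the function and the wording are hypothetical; only the 'returns' and 'param<n>' tag names come from the text above), a documented callable might look like:

```
function divide(a, b) {
    return (a/b);
    #doc returns: the quotient of a divided by b.
    param1: the dividend.
    param2: the divisor; must not be zero.
}
```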
returns:
author:
date:
synopsis:
description:
seealso:
bugs:
h1:
h2:
h3:
h4:
hidden:
This is tag that can be used to hide text from the eventual output. Things like to-do lists, remarks to other programmers concerning documentation, or hiding the one, unbalanced opening curly brace from disturbing the ufy lexer can be placed here. | http://ufy.sourceforge.net/ufydoc.html | CC-MAIN-2017-17 | refinedweb | 292 | 58.11 |
OpenOCD User Guide
Contents
- 1 OpenOCD User’s Guide
- 2 About
- 3 1. OpenOCD Developer Resources
- 4 2. Debug Adapter Hardware
- 5 3. About Jim-Tcl
- 6 4. Running
- 7 5. OpenOCD Project Setup
- 8 6. Config File Guidelines
- 8.1 6.1 Interface Config Files
- 8.2 6.2 Board Config Files
- 8.3 6.3 Target Config Files
- 8.3.1 6.3.1 Default Value Boiler Plate Code
- 8.3.2 6.3.2 Adding TAPs to the Scan Chain
- 8.3.3 6.3.3 Add CPU targets
- 8.3.4 6.3.4 Define CPU targets working in SMP
- 8.3.5 6.3.5 Chip Reset Setup
- 8.3.6 6.3.6 The init_targets procedure
- 8.3.7 6.3.7 ARM Core Specific Hacks
- 8.3.8 6.3.8 Internal Flash Configuration
- 8.4 6.4 Translating Configuration Files
- 9 7. Daemon Configuration
- 10 8. Debug Adapter Configuration
- 11 9. Reset Configuration
- 12 10. TAP Declaration
- 13 11. CPU Configuration
- 14 12. Flash Commands
- 14.1 12.1 Flash Configuration Commands
- 14.2 12.2 Erasing, Reading, Writing to Flash
- 14.3 12.3 Other Flash commands
- 14.4 12.4 Flash Driver List
- 14.5 12.5 mFlash
- 15 13. NAND Flash Commands
- 16 14. PLD/FPGA Commands
- 17 15. General Commands
- 18 16. Architecture and Core Commands
- 18.1 16.1 ARM Hardware Tracing
- 18.2 16.2 Generic ARM
- 18.3 16.3 ARMv4 and ARMv5 Architecture
- 18.4 16.4 ARMv6 Architecture
- 18.5 16.5 ARMv7 Architecture
- 18.6 16.6 Software Debug Messages and Tracing
- 19 17. JTAG Commands
- 20 18. Boundary Scan Commands
- 21 19. TFTP
- 22 20. GDB and OpenOCD
- 23 21. Tcl Scripting API
- 24 22. FAQ
- 25 23. Tcl Crash Course
- 25.1 23.1 Tcl Rule #1
- 25.2 23.2 Tcl Rule #1b
- 25.3 23.3 Per Rule #1 - All Results are strings
- 25.4 23.4 Tcl Quoting Operators
- 25.5 23.5 Consequences of Rule 1/2/3/4
- 25.6 23.6 OpenOCD Tcl Usage
- 25.7 23.7 Other Tcl Hacks
- 26 OpenOCD Concept Index
- 27 Command and Driver Index
- 28 Footnotes
- 29 Table of Contents
- 30 Short Table of Contents
- 31 About This Document
OpenOCD User’s Guide
This User’s Guide documents the Open On-Chip Debugger (OpenOCD). Permission is granted to copy, distribute and/or modify this document under the terms of the “GNU Free Documentation License”.
About
OpenOCD was created by Dominic Rath as part of a diploma thesis written at the University of Applied Sciences Augsburg (). Since that time, the project has grown into an active open-source project, supported by a diverse community of software and hardware developers from around the world.
0.1 What is OpenOCD?
The Open On-Chip Debugger (OpenOCD) aims to provide debugging, in-system programming and boundary-scan testing for embedded target devices.
It does so with the assistance of a debug adapter, which is a small hardware module which helps provide the right kind of electrical signaling to the target being debugged. These are required since the debug host (on which OpenOCD runs) won’t usually have native support for such signaling, or the connector needed to hook up to the target.
Such debug adapters support one or more transport protocols, each of which involves different electrical signaling (and uses different messaging protocols on top of that signaling). There are many types of debug adapter, and little uniformity in what they are called. (There are also product naming differences.)
These adapters are sometimes packaged as discrete dongles, which may generically be called hardware interface dongles. Some development boards also integrate them directly, which may let the development board can be directly connected to the debug host over USB (and sometimes also to power it over USB).
For example, a JTAG Adapter supports JTAG signaling, and is used to communicate with JTAG (IEEE 1149.1) compliant TAPs on your target board. A TAP is a “Test Access Portâ€, a module which processes special instructions and data. TAPs are daisy-chained within and between chips and boards. JTAG supports debugging and boundary scan operations.
There are also SWD Adapters that support Serial Wire Debug (SWD) signaling to communicate with some newer ARM cores, as well as debug adapters which support both JTAG and SWD transports. SWD only supports debugging, whereas JTAG also supports boundary scan operations.
For some chips, there are also Programming Adapters supporting special transports used only to write code to flash memory, without support for on-chip debugging or boundary scan. (At this writing, OpenOCD does not support such non-debug adapters.)
Dongles: OpenOCD currently supports many types of hardware dongles: USB based, parallel port based, and other standalone boxes that run OpenOCD internally. See section [#Debug-Adapter-Hardware Debug Adapter Hardware].
GDB Debug: It allows ARM7 (ARM7TDMI and ARM720t), ARM9 (ARM920T, ARM922T, ARM926EJ-S, ARM966E-S), XScale (PXA25x, IXP42x) and Cortex-M3 (Stellaris LM3 and ST STM32) based cores to be debugged via the GDB protocol.
Flash Programming: Flash writing is supported for external CFI compatible NOR flashes (Intel and AMD/Spansion command set) and several internal flashes (LPC1700, LPC2000, AT91SAM7, AT91SAM3U, STR7x, STR9x, LM3, and STM32x). Preliminary support for various NAND flash controllers (LPC3180, Orion, S3C24xx, more) is included.
0.2 OpenOCD Web Site
The OpenOCD web site provides the latest public news from the community:
0.3 Latest User’s Guide:
The user’s guide you are now reading may not be the latest one available. A version for more recent code may be available. Its HTML form is published irregularly at:
PDF form is likewise published at:
0.4 OpenOCD User’s Forum
There is an OpenOCD forum (phpBB) hosted by SparkFun, which might be helpful to you. Note that if you want anything to come to the attention of developers, you should post it to the OpenOCD Developer Mailing List instead of this forum.
1. OpenOCD Developer Resources
If you are interested in improving the state of OpenOCD’s debugging and testing support, new contributions will be welcome. Motivated developers can produce new target, flash or interface drivers, improve the documentation, as well as more conventional bug fixes and enhancements.
The resources in this chapter are available for developers wishing to explore or expand the OpenOCD source code.
1.1 OpenOCD GIT Repository
During the 0.3.x release cycle, OpenOCD switched from Subversion to a GIT repository hosted at SourceForge. The repository URL is:
git://openocd.git.sourceforge.net/gitroot/openocd/openocd
You may prefer to use a mirror and the HTTP protocol:
With standard GIT tools, use
git clone to initialize a local repository, and
git pull to update it. There are also gitweb pages letting you browse the repository with a web browser, or download arbitrary snapshots without needing a GIT client:
The ‘README’ file contains the instructions for building the project from the repository or a snapshot.
Developers that want to contribute patches to the OpenOCD system are strongly encouraged to work against mainline. Patches created against older versions may require additional work from their submitter in order to be updated for newer releases.
1.2 Doxygen Developer Manual
During the 0.2.x release cycle, the OpenOCD project began providing a Doxygen reference manual. This document contains more technical information about the software internals, development processes, and similar documentation:
This document is a work-in-progress, but contributions would be welcome to fill in the gaps. All of the source files are provided in-tree, listed in the Doxyfile configuration in the top of the source tree.
1.3 OpenOCD Developer Mailing List
The OpenOCD Developer Mailing List provides the primary means of communication between developers:
Discuss and submit patches to this list. The ‘HACKING’ file contains basic information about how to prepare patches.
1.4 OpenOCD Bug Database
During the 0.4.x release cycle the OpenOCD project team began using Trac for its bug database:
2. Debug Adapter Hardware
Defined: dongle: A small device that plugs into a computer and serves as an adapter .... [snip]
In the OpenOCD case, this generally refers to a small adapter that attaches to your computer via USB or the Parallel Printer Port. One exception is the Zylin ZY1000, packaged as a small box you attach via an ethernet cable.
2.1 Choosing a Dongle
There are several things you should keep in mind when choosing a dongle.
- Transport Does it support the kind of communication that you need? OpenOCD focusses mostly on JTAG. Your version may also support other ways to communicate with target devices.
- Voltage What voltage is your target - 1.8, 2.8, 3.3, or 5V? Does your dongle support it? You might need a level converter.
- Pinout What pinout does your target board use? Does your dongle support it? You may be able to use jumper wires, or an "octopus" connector, to convert pinouts.
- Connection Does your computer have the USB, printer, or Ethernet port needed?
- RTCK Do you expect to use it with ARM chips and boards with RTCK support? Also known as “adaptive clockingâ€
2.2 Stand alone Systems
ZY1000 See: Technically, not a dongle, but a standalone box. The ZY1000 has the advantage that it does not require any drivers installed on the developer PC. It also has a built in web interface. It supports RTCK/RCLK or adaptive clocking and has a built in relay to power cycle targets remotely.
2.3 USB FT2232 Based
There are many USB JTAG dongles on the market, many of them based on a chip from “Future Technology Devices International” (FTDI) known as the FTDI FT2232; this is a USB full speed (12 Mbps) chip. See: for more information. In summer 2009, USB high speed (480 Mbps) versions of these FTDI chips started to become available in JTAG adapters. (Adapters using those high speed FT2232H chips may support adaptive clocking.)
- usbjtag
Link
- jtagkey
See:
- jtagkey2
See:
- oocdlink
See: By Joern Kaipf
- signalyzer
See:
- Stellaris Eval Boards
See: - The Stellaris eval boards bundle FT2232-based JTAG and SWD support, which can be used to debug the Stellaris chips. Using separate JTAG adapters is optional. These boards can also be used in a "pass through" mode as JTAG adapters to other target boards, disabling the Stellaris chip.
- Luminary ICDI
See: - Luminary In-Circuit Debug Interface (ICDI) Boards are included in Stellaris LM3S9B9x Evaluation Kits. Like the non-detachable FT2232 support on the other Stellaris eval boards, they can be used to debug other target boards.
- olimex-jtag
See:
- Flyswatter/Flyswatter2
See:
- turtelizer2
See: Turtelizer 2, or
- comstick
Link:
- stm32stick
Link
- axm0432_jtag
Axiom AXM-0432 Link - NOTE: This JTAG does not appear to be available anymore as of April 2012.
- cortino
Link
- dlp-usb1232h
Link
- digilent-hs1
Link
2.4 USB-JTAG / Altera USB-Blaster compatibles
These devices also show up as FTDI devices, but are not protocol-compatible with the FT2232 devices. They are, however, protocol-compatible among themselves. USB-JTAG devices typically consist of a FT245 followed by a CPLD that understands a particular protocol, or emulate this protocol using some other hardware.
They may appear under different USB VID/PID depending on the particular product. The driver can be configured to search for any VID/PID pair (see the section on driver commands).
- USB-JTAG Kolja Waschk’s USB Blaster-compatible adapter
Link:
- Altera USB-Blaster
Link:
2.5 USB JLINK based
There are several OEM versions of the Segger JLINK adapter. It is an example of a micro controller based JTAG adapter, it uses an AT91SAM764 internally.
- ATMEL SAMICE Only works with ATMEL chips!
Link:
- SEGGER JLINK
Link:
- IAR J-Link
Link:
2.6 USB RLINK based
- Raisonance RLink
Link:
- STM32 Primer
Link:
- STM32 Primer2
Link:
2.7 USB ST-LINK based
ST Micro has an adapter called ST-LINK. They only work with ST Micro chips, notably STM32 and STM8.
- ST-LINK
This is available standalone and as part of some kits, eg. STM32VLDISCOVERY.
Link:
- ST-LINK/V2
This is available standalone and as part of some kits, eg. STM32F4DISCOVERY.
Link:
For info: the original ST-LINK enumerates using the mass storage USB class, however its implementation is completely broken. The result is that this causes issues under Linux. The simplest solution is to get Linux to ignore the ST-LINK using one of the following methods:
- modprobe -r usb-storage && modprobe usb-storage quirks=483:3744:i
- add "options usb-storage quirks=483:3744:i" to /etc/modprobe.conf
2.8 USB Other
- USBprog
Link: - which uses an Atmel MEGA32 and a UBN9604
- USB - Presto
Link:
- Versaloon-Link
Link:
- ARM-JTAG-EW
Link:
- Buspirate
Link:
2.9 IBM PC Parallel Printer Port Based
The two well-known “JTAG Parallel Port” cables are the Xilinx DLC5 and the Macraigor Wiggler. There are many clones and variations of these on the market.
Note that parallel ports are becoming much less common, so if you have the choice you should probably avoid these adapters in favor of USB-based ones.
- Wiggler - There are many clones of this.
Link:
- DLC5 - From XILINX - There are many clones of this
Link: Search the web for: “XILINX DLC5†- it is no longer produced, PDF schematics are easily found and it is easy to make.
- Amontec - JTAG Accelerator
Link:
- GW16402
Link:
- Wiggler2
Link:
- Wiggler_ntrst_inverted
Yet another variation - See the source code, src/jtag/parport.c
- old_amt_wiggler
Unknown - probably not on the market today
- arm-jtag
Link: Most likely [another wiggler clone]
- chameleon
Link:
- Triton
Unknown.
- Lattice
ispDownload from Lattice Semiconductor
- flashlink
From ST Microsystems;
Link:
2.10 Other...
- ep93xx
An EP93xx based Linux machine using the GPIO pins directly.
- at91rm9200
Like the EP93xx - but an ATMEL AT91RM9200 based solution using the GPIO pins on the chip.
3. About Jim-Tcl
OpenOCD uses a small “Tcl Interpreter†known as Jim-Tcl. This programming language provides a simple and extensible command interpreter.
All commands presented in this Guide are extensions to Jim-Tcl. You can use them as simple commands, without needing to learn much of anything about Tcl. Alternatively, can write Tcl programs with them.
You can learn more about Jim at its website,. There is an active and responsive community, get on the mailing list if you have any questions. Jim-Tcl maintainers also lurk on the OpenOCD mailing list.
- Jim vs. Tcl
Jim-Tcl is a stripped down version of the well known Tcl language, which can be found here:. Jim-Tcl has far fewer features. Jim-Tcl is several dozens of .C files and .H files and implements the basic Tcl command set. In contrast: Tcl 8.6 is a 4.2 MB .zip file containing 1540 files.
- Missing Features
Our practice has been: Add/clone the real Tcl feature if/when needed. We welcome Jim-Tcl improvements, not bloat. Also there are a large number of optional Jim-Tcl features that are not enabled in OpenOCD.
- Scripts
OpenOCD configuration scripts are Jim-Tcl Scripts. OpenOCD’s command interpreter today is a mixture of (newer) Jim-Tcl commands, and (older) the orginal command interpreter.
- Commands
At the OpenOCD telnet command line (or via the GDB monitor command) one can type a Tcl for() loop, set variables, etc. Some of the commands documented in this guide are implemented as Tcl scripts, from a ‘startup.tcl’ file internal to the server.
- Historical Note
Jim-Tcl was introduced to OpenOCD in spring 2008. In fall 2010, before the OpenOCD 0.5 release, OpenOCD switched to using Jim-Tcl as a git submodule, which greatly simplified upgrading Jim-Tcl to benefit from new features and bugfixes.
- Need a crash course in Tcl?
See section [#Tcl-Crash-Course Tcl Crash Course].
4. Running
Properly installing OpenOCD sets up your operating system to grant it access to the debug adapters. On Linux, this usually involves installing a file in ‘/etc/udev/rules.d,’ so OpenOCD has permissions. MS-Windows needs complex and confusing driver configuration for every peripheral. Such issues are unique to each operating system, and are not detailed in this User’s Guide.
Then later you will invoke the OpenOCD server, with various options to tell it how each debug session should work. The ‘--help’ option shows a summary of those options. If you don’t give any ‘-f’ or ‘-c’ options, OpenOCD tries to read the configuration file ‘openocd.cfg’. To specify one or more different configuration files, use ‘-f’ options. For example: ‘openocd -f config1.cfg -f config2.cfg -f config3.cfg’.
Configuration files and scripts are searched for in
- the current directory,
- any search dir specified on the command line using the ‘-s’ option,
- any search dir specified using the ‘add_script_search_dir’ command,
- ‘$HOME/.openocd’ (not on Windows),
- the site wide script library ‘$pkgdatadir/site’ and
- the OpenOCD-supplied script library ‘$pkgdatadir/scripts’.
The first found file with a matching file name will be used.
Note: Don’t try to use configuration script names or paths which include the "#" character. That character begins Tcl comments.
4.1 Simple setup, no customization
In the best case, you can use two scripts from one of the script libraries, hook up your JTAG adapter, and start the server ... and your JTAG setup will just work "out of the box". Always try to start by reusing those scripts, but assume you’ll need more customization even if this works. See section [#OpenOCD-Project-Setup OpenOCD Project Setup].
If you find a script for your JTAG adapter, and for your board or target, you may be able to hook up your JTAG adapter then start the server like:
You might also need to configure which reset signals are present, using ‘-c 'reset_config trst_and_srst'’ or something similar. If all goes well you’ll see output something like
Seeing that "tap/device found" message, and no warnings, means the JTAG communication is working. That’s a key milestone, but you’ll probably need more project-specific setup.
4.2 What OpenOCD does as it starts
OpenOCD starts by processing the configuration commands provided on the command line or, if there were no ‘-c command’ or ‘-f file.cfg’ options given, in ‘openocd.cfg’. See [#Configuration-Stage Configuration Stage]. It is also possible to interleave Jim-Tcl commands with config scripts using the ‘-c’ command line switch.
To enable debug output (when reporting problems or working on OpenOCD itself), use the ‘-d’ command line switch. This sets the ‘debug_level’ to "3", outputting the most information, including debug messages. The default setting is "2", outputting only informational messages, warnings and errors. You can also change this setting from within a telnet or gdb session using ‘debug_level <n>’ (see [#debug_005flevel debug_level]).
You can redirect all output from the daemon to a file using the ‘-l <logfile>’ switch.
5. OpenOCD Project Setup
To use OpenOCD with your development projects, you need to do more than just connecting the JTAG adapter hardware (dongle) to your development board and then starting the OpenOCD server. You also need to configure that server so that it knows about that adapter and board, and helps your work. You may also want to connect OpenOCD to GDB, possibly using Eclipse or some other GUI.
5.1 Hooking up the JTAG Adapter
Today’s most common case is a dongle with a JTAG cable on one side (such as a ribbon cable with a 10-pin or 20-pin IDC connector) and a USB cable on the other. Instead of USB, some cables use Ethernet; older ones may use a PC parallel port, or even a serial port.
- Start with power to your target board turned off, and nothing connected to your JTAG adapter. If you’re particularly paranoid, unplug power to the board. It’s important to have the ground signal properly set up, unless you are using a JTAG adapter which provides galvanic isolation between the target board and the debugging host.
- Be sure it’s the right kind of JTAG connector. If your dongle has a 20-pin ARM connector, you need some kind of adapter (or octopus, see below) to hook it up to boards using 14-pin or 10-pin connectors ... or to 20-pin connectors which don’t use ARM’s pinout.
In the same vein, make sure the voltage levels are compatible. Not all JTAG adapters have the level shifters needed to work with 1.2 Volt boards.
- Be certain the cable is properly oriented or you might damage your board. In most cases there are only two possible ways to connect the cable. Connect the JTAG cable from your adapter to the board. Be sure it’s firmly connected.
In the best case, the connector is keyed to physically prevent you from inserting it wrong. This is most often done using a slot on the board’s male connector housing, which must match a key on the JTAG cable’s female connector. If there’s no housing, then you must look carefully and make sure pin 1 on the cable hooks up to pin 1 on the board. Ribbon cables are frequently all grey except for a wire on one edge, which is red. The red wire is pin 1. Sometimes dongles provide cables where one end is an “octopus” of color coded single-wire connectors, instead of a connector block. These are great when converting from one JTAG pinout to another, but are tedious to set up. Use these with connector pinout diagrams to help you match up the adapter signals to the right board pins.
- Connect the adapter’s other end once the JTAG cable is connected. A USB, parallel, or serial port connector will go to the host which you are using to run OpenOCD. For Ethernet, consult the documentation and your network administrator.
For USB based JTAG adapters you have an easy sanity check at this point: does the host operating system see the JTAG adapter? If that host is an MS-Windows host, you’ll need to install a driver before OpenOCD works.
- Connect the adapter’s power supply, if needed. This step is primarily for non-USB adapters, but sometimes USB adapters need extra power.
- Power up the target board. Unless you just let the magic smoke escape, you’re now ready to set up the OpenOCD server so you can use JTAG to work with that board.
Talk with the OpenOCD server using telnet (‘telnet localhost 4444’ on many systems) or GDB. See section [#GDB-and-OpenOCD GDB and OpenOCD].
5.2 Project Directory
There are many ways you can configure OpenOCD and start it up.
A simple way to organize them all involves keeping a single directory for your work with a given board. When you start OpenOCD from that directory, it searches there first for configuration files, scripts, files accessed through semihosting, and for code you upload to the target board. It is also the natural place to write files, such as log files and data you download from the board.
5.3 Configuration Basics
There are two basic ways of configuring OpenOCD, and a variety of ways you can mix them. Think of the difference as just being how you start the server:
- Many ‘-f file’ or ‘-c command’ options on the command line
- No options, but a user config file in the current directory named ‘openocd.cfg’
Here is an example ‘openocd.cfg’ file for a setup using a Signalyzer FT2232-based JTAG adapter to talk to a board with an Atmel AT91SAM7X256 microcontroller:
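The example file itself was lost from this copy of the manual; a minimal sketch of such an ‘openocd.cfg’, where the exact interface and target script names are assumptions patterned after the stock OpenOCD script library, might look like:

```tcl
# Hypothetical openocd.cfg for a Signalyzer + AT91SAM7X256 setup.
# Script names are assumptions; check your scripts/ directory.
source [find interface/signalyzer.cfg]

# Let GDB program flash through OpenOCD's memory map support.
gdb_memory_map enable
gdb_flash_program enable

source [find target/at91sam7x256.cfg]
```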
Here is the command line equivalent of that configuration:
You could wrap such long command lines in shell scripts, each supporting a different development task. One might re-flash the board with a specific firmware version. Another might set up a particular debugging or run-time environment.
Important: At this writing (October 2009) the command line method has problems with how it treats variables. For example, after ‘-c "set VAR value"’, or doing the same in a script, the variable VAR will have no value that can be tested in a later script.
Here we will focus on the simpler solution: one user config file, including basic configuration plus any TCL procedures to simplify your work.
5.4 User Config Files
A user configuration file ties together all the parts of a project in one place. One of the following will match your situation best:
- Ideally almost everything comes from configuration files provided by someone else. For example, OpenOCD distributes a ‘scripts’ directory (probably in ‘/usr/share/openocd/scripts’ on Linux). Board and tool vendors can provide these too, as can individual user sites; the ‘-s’ command line option lets you say where to find these files. (See section [#Running Running].) The AT91SAM7X256 example above works this way.
Three main types of non-user configuration file each have their own subdirectory in the ‘scripts’ directory:
- interface – one for each different debug adapter;
- board – one for each different board;
- target – the chips which integrate CPUs and other JTAG TAPs.
Best case: include just two files, and they handle everything else. The first is an interface config file. The second is board-specific, and it sets up the JTAG TAPs and their GDB targets (by deferring to some ‘target.cfg’ file), declares all flash memory, and leaves you nothing to do except meet your deadline:
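In that best case the user config file is only two lines; a sketch, with a hypothetical board file name:

```tcl
# First the debug adapter, then the board (which sources the
# right target file itself). "mytargetboard" is a placeholder.
source [find interface/signalyzer.cfg]
source [find board/mytargetboard.cfg]
```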
Boards with a single microcontroller often won’t need more than the target config file, as in the AT91SAM7X256 example. That’s because there is no external memory (flash, DDR RAM), and the board differences are encapsulated by application code.
- Maybe you don’t know yet what your board looks like to JTAG. Once you know the ‘interface.cfg’ file to use, you may need help from OpenOCD to discover what’s on the board. Once you find the JTAG TAPs, you can just search for appropriate target and board configuration files ... or write your own, from the bottom up. See [#Autoprobing Autoprobing].
- You can often reuse some standard config files but need to write a few new ones, probably a ‘board.cfg’ file. You will be using commands described later in this User’s Guide, and working with the guidelines in the next chapter.
For example, there may be configuration files for your JTAG adapter and target chip, but you need a new board-specific config file giving access to your particular flash chips. Or you might need to write another target chip configuration file for a new chip built around the Cortex M3 core.
Note: When you write new configuration files, please submit them for inclusion in the next OpenOCD release. For example, a ‘board/newboard.cfg’ file will help the next users of that board, and a ‘target/newcpu.cfg’ will help support users of any board using that chip.
- You may need to write some C code. It may be as simple as supporting a new ft2232 or parport based adapter; a bit more involved, like a NAND or NOR flash controller driver; or a big piece of work like supporting a new chip architecture.
Reuse the existing config files when you can. Look first in the ‘scripts/boards’ area, then ‘scripts/targets’. You may find a board configuration that’s a good example to follow.
When you write config files, separate the reusable parts (things every user of that interface, chip, or board needs) from ones specific to your environment and debugging approach.
- For example, a ‘gdb-attach’ event handler that invokes the ‘reset init’ command will interfere with debugging early boot code, which performs some of the same actions that the ‘reset-init’ event handler does.
- Likewise, the ‘arm9 vector_catch’ command (or its siblings ‘xscale vector_catch’ and ‘cortex_m3 vector_catch’) can be a timesaver during some debug sessions, but don’t make everyone use that either. Keep those kinds of debugging aids in your user config file, along with messaging and tracing setup. (See [#Software-Debug-Messages-and-Tracing Software Debug Messages and Tracing].)
- You might need to override some defaults. For example, you might need to move, shrink, or back up the target’s work area if your application needs much SRAM.
- TCP/IP port configuration is another example of something which is environment-specific, and should only appear in a user config file. See [#TCP_002fIP-Ports TCP/IP Ports].
5.5 Project-Specific Utilities
A few project-specific utility routines may well speed up your work. Write them, and keep them in your project’s user config file.
For example, if you are making a boot loader work on a board, it’s nice to be able to debug the “after it’s loaded to RAM†parts separately from the finicky early code which sets up the DDR RAM controller and clocks. A script like this one, or a more GDB-aware sibling, may help:
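The script itself is missing from this copy; a sketch of such a helper, where the image name and RAM address are placeholders for a hypothetical board, might be:

```tcl
proc ramboot { } {
    # Reset, running the target's reset-init handler to set up
    # clocks and the DDR RAM controller; leave the CPU halted.
    reset init

    # Load a RAM-linked bootloader image (name and load address
    # are board-specific placeholders).
    load_image u-boot.bin 0x20000000

    # Start running from the load address.
    resume 0x20000000
}
```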
Then once that code is working you will need to make it boot from NOR flash; a different utility would help. Alternatively, some developers write to flash using GDB. (You might use a similar script if you’re working with a flash based microcontroller application instead of a boot loader.)
You may need more complicated utility procedures when booting from NAND. That often involves an extra bootloader stage, running from on-chip SRAM to perform DDR RAM setup so it can load the main bootloader code (which won’t fit into that SRAM).
Other helper scripts might be used to write production system images, involving considerably more than just a three stage bootloader.
5.6 Target Software Changes
Sometimes you may want to make some small changes to the software you’re developing, to help make JTAG debugging work better. For example, in C or assembly language code you might use ‘#ifdef JTAG_DEBUG’ (or its converse) around code handling issues like:
- Watchdog Timers... Watchdog timers are typically used to automatically reset systems if some application task doesn’t periodically reset the timer. (The assumption is that the system has locked up if the task can’t run.) When a JTAG debugger halts the system, that task won’t be able to run and reset the timer ... potentially causing resets in the middle of your debug sessions.
It’s rarely a good idea to disable such watchdogs, since their usage needs to be debugged just like all other parts of your firmware. That might however be your only option. Look instead for chip-specific ways to stop the watchdog from counting while the system is in a debug halt state. It may be simplest to set that non-counting mode in your debugger startup scripts. You may however need a different approach when, for example, a motor could be physically damaged by firmware remaining inactive in a debug halt state. That might involve a type of firmware mode where that "non-counting" mode is disabled at the beginning then re-enabled at the end; a watchdog reset might fire and complicate the debug session, but hardware (or people) would be protected.[#FOOT1 (1)]
- ARM Semihosting... When linked with a special runtime library provided with many toolchains[#FOOT2 (2)], your target code can use I/O facilities on the debug host. That library provides a small set of system calls which are handled by OpenOCD. It can let the debugger provide your system console and a file system, helping with early debugging or providing a more capable environment for sometimes-complex tasks like installing system firmware onto NAND or SPI flash.
- ARM Wait-For-Interrupt... Many ARM chips synchronize the JTAG clock using the core clock. Low power states which stop that core clock thus prevent JTAG access. Idle loops in tasking environments often enter those low power states via the ‘WFI’ instruction (or its coprocessor equivalent, before ARMv7).
You may want to disable that instruction in source code, or otherwise prevent using that state, to ensure you can get JTAG access at any time.[#FOOT3 (3)] For example, the OpenOCD ‘halt’ command may not work for an idle processor otherwise.
- Delay after reset... Not all chips have good support for debugger access right after reset; many LPC2xxx chips have issues here. Similarly, applications that reconfigure pins used for JTAG access as they start will also block debugger access.
To work with boards like this, enable a short delay loop the first thing after reset, before "real" startup activities. For example, one second’s delay is usually more than enough time for a JTAG debugger to attach, so that early code execution can be debugged or firmware can be replaced.
- Debug Communications Channel (DCC)... Some processors include mechanisms to send messages over JTAG. Many ARM cores support these, as do some cores from other vendors. (OpenOCD may be able to use this DCC internally, speeding up some operations like writing to memory.)
Your application may want to deliver various debugging messages over JTAG, by linking with a small library of code provided with OpenOCD and using the utilities there to send various kinds of message. See [#Software-Debug-Messages-and-Tracing Software Debug Messages and Tracing].
5.7 Target Hardware Setup
Chip vendors often provide software development boards which are highly configurable, so that they can support all options that product boards may require. Make sure that any jumpers or switches match the system configuration you are working with.
Common issues include:
- JTAG setup ... Boards may support more than one JTAG configuration. Examples include jumpers controlling pullups versus pulldowns on the nTRST and/or nSRST signals, and choice of connectors (e.g. which of two headers on the base board, or one from a daughtercard). For some Texas Instruments boards, you may need to jumper the EMU0 and EMU1 signals (which OpenOCD won’t currently control).
- Boot Modes ... Complex chips often support multiple boot modes, controlled by external jumpers. Make sure this is set up correctly. For example many i.MX boards from NXP need to be jumpered to "ATX mode" to start booting using the on-chip ROM, when using second stage bootloader code stored in a NAND flash chip.
Such explicit configuration is common, and not limited to booting from NAND. You might also need to set jumpers to start booting using code loaded from an MMC/SD card; external SPI flash; Ethernet, UART, or USB links; NOR flash; OneNAND flash; some external host; or various other sources.
- Memory Addressing ... Boards which support multiple boot modes may also have jumpers to configure memory addressing. One board, for example, jumpers external chipselect 0 (used for booting) to address either a large SRAM (which must be pre-loaded via JTAG), NOR flash, or NAND flash. When it’s jumpered to address NAND flash, that board must also be told to start booting from on-chip ROM.
Your ‘board.cfg’ file may also need to be told this jumper configuration, so that it can know whether to declare NOR flash using ‘flash bank’ or instead declare NAND flash with ‘nand device’; and likewise which probe to perform in its ‘reset-init’ handler.
A closely related issue is bus width. Jumpers might need to distinguish between 8 bit or 16 bit bus access for the flash used to start booting.
- Peripheral Access ... Development boards generally provide access to every peripheral on the chip, sometimes in multiple modes (such as by providing multiple audio codec chips). This interacts with software configuration of pin multiplexing, where for example a given pin may be routed either to the MMC/SD controller or the GPIO controller. It also often interacts with configuration jumpers. One jumper may be used to route signals to an MMC/SD card slot or an expansion bus (which might in turn affect booting); others might control which audio or video codecs are used.
Plus you should of course have ‘reset-init’ event handlers which set up the hardware to match that jumper configuration. That includes in particular any oscillator or PLL used to clock the CPU, and any memory controllers needed to access external memory and peripherals. Without such handlers, you won’t be able to access those resources without working target firmware which can do that setup ... this can be awkward when you’re trying to debug that target firmware. Even if there’s a ROM bootloader which handles a few issues, it rarely provides full access to all board-specific capabilities.
6. Config File Guidelines
This chapter is aimed at any user who needs to write a config file, including developers and integrators of OpenOCD and any user who needs to get a new board working smoothly. It provides guidelines for creating those files.
You should find the following directories under $(INSTALLDIR)/scripts, with files including the ones listed here. Use them as-is where you can; or as models for new files.
- ‘interface’ ... These are for debug adapters. Files that configure JTAG adapters go here.
- ‘board’ ... think Circuit Board, PWA, PCB, they go by many names. Board files contain initialization items that are specific to a board. They reuse target configuration files, since the same microprocessor chips are used on many boards, but support for external parts varies widely. For example, the SDRAM initialization sequence for the board, or the type of external flash and what address it uses. Any initialization sequence to enable that external flash or SDRAM should be found in the board file. Boards may also contain multiple targets: two CPUs; or a CPU and an FPGA.
- ‘target’ ... think chip. The “target” directory represents the JTAG TAPs on a chip which OpenOCD should control, not a board. Two common types of targets are ARM chips and FPGA or CPLD chips. When a chip has multiple TAPs (maybe it has both ARM and DSP cores), the target config file defines all of them.
- more ... browse for other library files which may be useful. For example, there are various generic and CPU-specific utilities.
The ‘openocd.cfg’ user config file may override features in any of the above files by setting variables before sourcing the target file, or by adding commands specific to their situation.
6.1 Interface Config Files
The user config file should be able to source one of these files with a command like this:
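For instance (FOOBAR is a placeholder for a real adapter script name):

```tcl
# Pull in the debug adapter configuration from the script library.
source [find interface/FOOBAR.cfg]
```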
A preconfigured interface file should exist for every debug adapter in use today with OpenOCD. That said, perhaps some of these config files have only been used by the developer who created it.
A separate chapter gives information about how to set these up. See section [#Debug-Adapter-Configuration Debug Adapter Configuration]. Read the OpenOCD source code (and Developer’s Guide) if you have a new kind of hardware interface and need to provide a driver for it.
6.2 Board Config Files
The user config file should be able to source one of these files with a command like this:
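For instance (FOOBAR is a placeholder for a real board script name):

```tcl
# Pull in the board configuration from the script library.
source [find board/FOOBAR.cfg]
```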
The point of a board config file is to package everything about a given board that user config files need to know. In summary the board files should contain (if present)
- One or more ‘source [target/...cfg]’ statements
- NOR flash configuration (see [#NOR-Configuration NOR Configuration])
- NAND flash configuration (see [#NAND-Configuration NAND Configuration])
- Target ‘reset’ handlers for SDRAM and I/O configuration
- JTAG adapter reset configuration (see section [#Reset-Configuration Reset Configuration])
- All things that are not “inside a chip”.
6.2.1 Communication Between Config files
In addition to target-specific utility code, another way that board and target config files communicate is by following a convention on how to use certain variables.
The full Tcl/Tk language supports “namespaces”, but Jim-Tcl does not. Thus the rule we follow in OpenOCD is this: Variables that begin with a leading underscore are temporary in nature, and can be modified and used at will within a target configuration file.
Complex board config files can do things like this, for a board with three chips:
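The extracted manual omits the example; a sketch of the idea, with placeholder chip names and target files, could be:

```tcl
# Hypothetical board with three chips: set CHIPNAME before each
# target file so each one declares differently named TAPs.
set CHIPNAME cpu
source [find target/chip1.cfg]

set CHIPNAME dsp
source [find target/chip2.cfg]

set CHIPNAME fpga
source [find target/chip3.cfg]
```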
That example is oversimplified because it doesn’t show any flash memory, or the ‘reset-init’ event handlers to initialize external DRAM or (assuming it needs it) load a configuration into the FPGA. Such features are usually needed for low-level work with many boards, where “low level” implies that the board initialization software may not be working. (That’s a common reason to need JTAG tools. Another is to enable working with microcontroller-based systems, which often have no debugging support except a JTAG connector.)
Target config files may also export utility functions to board and user config files. Such functions should use name prefixes, to help avoid naming collisions.
Board files could also accept input variables from user config files. For example, there might be a ‘J4_JUMPER’ setting used to identify what kind of flash memory a development board is using, or how to set up other clocks and peripherals.
6.2.2 Variable Naming Convention
Most boards have only one instance of a chip. However, it should be easy to create a board with more than one such chip (as shown above). Accordingly, we encourage these conventions for naming variables associated with different ‘target.cfg’ files, to promote consistency and so that board files can override target defaults.
Inputs to target config files include:
CHIPNAME... This gives a name to the overall chip, and is used as part of tap identifier dotted names. While the default is normally provided by the chip manufacturer, board files may need to distinguish between instances of a chip.
ENDIAN... By default ‘little’ - although chips may hard-wire ‘big’. Chips that can’t change endianness don’t need to use this variable.
CPUTAPID... When OpenOCD examines the JTAG chain, it can be told to verify the chips against the JTAG IDCODE register. The target file will hold one or more defaults, but sometimes the chip in a board will use a different ID (perhaps a newer revision).
Outputs from target config files include:
_TARGETNAME... By convention, this variable is created by the target configuration script. The board configuration file may make use of this variable to configure things like a “reset init” script, or other things specific to that board and that target. If the chip has 2 targets, the names are ‘_TARGETNAME0’, ‘_TARGETNAME1’, ... etc.
6.2.3 The reset-init Event Handler
Board config files run in the OpenOCD configuration stage; they can’t use TAPs or targets, since they haven’t been fully set up yet. This means you can’t write memory or access chip registers; you can’t even verify that a flash chip is present. That’s done later in event handlers, of which the target ‘reset-init’ handler is one of the most important.
Except on microcontrollers, the basic job of ‘reset-init’ event handlers is setting up flash and DRAM, as normally handled by boot loaders. Microcontrollers rarely use boot loaders; they run right out of their on-chip flash and SRAM memory. But they may want to use one of these handlers too, if just for developer convenience.
Note: Because this is so very board-specific, and chip-specific, no examples are included here. Instead, look at the board config files distributed with OpenOCD. If you have a boot loader, its source code will help; so will configuration files for other JTAG tools (see [#Translating-Configuration-Files Translating Configuration Files]).
Some of this code could probably be shared between different boards. For example, setting up a DRAM controller often doesn’t differ by much except the bus width (16 bits or 32?) and memory timings, so a reusable TCL procedure loaded by the ‘target.cfg’ file might take those as parameters. Similarly with oscillator, PLL, and clock setup; and disabling the watchdog. Structure the code cleanly, and provide comments to help the next developer doing such work. (You might be that next person trying to reuse init code!)
The last thing normally done in a ‘reset-init’ handler is probing whatever flash memory was configured. For most chips that needs to be done while the associated target is halted, either because JTAG memory access uses the CPU or to prevent conflicting CPU access.
6.2.4 JTAG Clock Rate
Before your ‘reset-init’ handler has set up the PLLs and clocking, you may need to run with a low JTAG clock rate. See [#JTAG-Speed JTAG Speed]. Then you’d increase that rate after your handler has made it possible to use the faster JTAG clock. When the initial low speed is board-specific, for example because it depends on a board-specific oscillator speed, then you should probably set it up in the board config file; if it’s target-specific, it belongs in the target config file.
For most ARM-based processors the fastest JTAG clock[#FOOT4 (4)] is one sixth of the CPU clock; or one eighth for ARM11 cores. Consult chip documentation to determine the peak JTAG clock rate, which might be less than that.
Warning: On most ARMs, JTAG clock detection is coupled to the core clock, so software using a ‘wait for interrupt’ operation blocks JTAG access. Adaptive clocking provides a partial workaround, but a more complete solution just avoids using that instruction with JTAG debuggers.
If both the chip and the board support adaptive clocking, use the ‘jtag_rclk’ command, in case your board is used with a JTAG adapter which also supports it. Otherwise use ‘adapter_khz’. Set the slow rate at the beginning of the reset sequence, and the faster rate as soon as the clocks are at full speed.
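As a sketch (the rates are illustrative, not chip data), a reset-init handler on an adaptive-clocking-capable setup might bracket its clock setup like this:

```tcl
# Before PLL setup: run the JTAG clock slowly, with a low
# fallback rate for adapters that lack RCLK support.
jtag_rclk 300

# ... oscillator and PLL setup goes here ...

# Once the core clock is at full speed, allow a faster fallback.
jtag_rclk 3000
```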
6.2.5 The init_board procedure
The concept of the ‘init_board’ procedure is very similar to ‘init_targets’ (see [#The-init_005ftargets-procedure The init_targets procedure]); it’s a replacement for “linear” configuration scripts. This procedure is meant to be executed when OpenOCD enters the run stage (see [#Entering-the-Run-Stage Entering the Run Stage]), after ‘init_targets’. The idea behind having separate ‘init_targets’ and ‘init_board’ procedures is to let the first one configure everything target specific (internal flash, internal RAM, etc.) and the second one configure everything board specific (reset signals, chip frequency, reset-init event handler, external memory, etc.). Additionally, a “linear” board config file will most likely fail when the target config file uses the ‘init_targets’ scheme (a “linear” script is executed before ‘init’, and ‘init_targets’ after it), so separating these two configuration stages is very convenient; the easiest way to overcome the problem is to convert the board config file to use an ‘init_board’ procedure. Board config scripts don’t need to override ‘init_targets’ defined in target config files when they only need to add some specifics.
Just as with ‘init_targets’, the ‘init_board’ procedure can be overridden by a “next level” script (which sources the original), allowing greater code reuse.
6.3 Target Config Files
Board config files communicate with target config files using naming conventions as described above, and may source one or more target config files like this:
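For instance (FOOBAR is a placeholder for a real target script name):

```tcl
# Pull in one chip's target configuration from the script library.
source [find target/FOOBAR.cfg]
```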
The point of a target config file is to package everything about a given chip that board config files need to know. In summary the target files should contain
- Set defaults
- Add TAPs to the scan chain
- Add CPU targets (includes GDB support)
- CPU/Chip/CPU-Core specific features
- On-Chip flash
As a rule of thumb, a target file sets up only one chip. For a microcontroller, that will often include a single TAP, which is a CPU needing a GDB target, and its on-chip flash.
More complex chips may include multiple TAPs, and the target config file may need to define them all before OpenOCD can talk to the chip. For example, some phone chips have JTAG scan chains that include an ARM core for operating system use, a DSP, another ARM core embedded in an image processing engine, and other processing engines.
6.3.1 Default Value Boiler Plate Code
All target configuration files should start with code like this, letting board config files express environment-specific differences in how things should be set up.
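The boilerplate itself is missing here; it conventionally looks roughly like this (the chip name and IDCODE value are illustrative, patterned after shipped target files):

```tcl
# Use board-provided values when they exist, else chip defaults.
if { [info exists CHIPNAME] } {
    set _CHIPNAME $CHIPNAME
} else {
    set _CHIPNAME sam7x256
}

if { [info exists ENDIAN] } {
    set _ENDIAN $ENDIAN
} else {
    set _ENDIAN little
}

if { [info exists CPUTAPID] } {
    set _CPUTAPID $CPUTAPID
} else {
    set _CPUTAPID 0x3f0f0f0f
}
```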
Remember: Board config files may include multiple target config files, or the same target file multiple times (changing at least ‘CHIPNAME’).
Likewise, the target configuration file should define ‘_TARGETNAME’ (or ‘_TARGETNAME0’ etc.) and use it later on when defining debug targets:
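A sketch of the convention:

```tcl
# Derive the target name from the chip name; board and user
# config files can then refer to $_TARGETNAME later.
set _TARGETNAME $_CHIPNAME.cpu
```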
6.3.2 Adding TAPs to the Scan Chain
After the “defaults” are set up, add the TAPs on each chip to the JTAG scan chain. See section [#TAP-Declaration TAP Declaration], and the naming convention for taps.
In the simplest case the chip has only one TAP, probably for a CPU or FPGA. The config file for the Atmel AT91SAM7X256 looks (in part) like this:
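The TAP declaration in that file uses the boilerplate variables described above; roughly:

```tcl
# a single TAP for the ARM7TDMI core
jtag newtap $_CHIPNAME cpu -irlen 4 -ircapture 0x1 -irmask 0xf \
        -expected-id $_CPUTAPID
```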
A board with two such at91sam7 chips would be able to source such a config file twice, with different values for
CHIPNAME, so it adds a different TAP each time.
If there are nonzero ‘-expected-id’ values, OpenOCD attempts to verify the actual tap id against those values. It will issue error messages if there is a mismatch, which can help pinpoint problems in OpenOCD configurations.
There are more complex examples too, with chips that have multiple TAPs. Ones worth looking at include:
- ‘target/omap3530.cfg’ – with disabled ARM and DSP, plus a JRC to enable them
- ‘target/str912.cfg’ – with flash, CPU, and boundary scan
- ‘target/ti_dm355.cfg’ – with ETM, ARM, and JRC (this JRC is not currently used)
6.3.3 Add CPU targets
After adding a TAP for a CPU, you should set it up so that GDB and other commands can use it. See section [#CPU-Configuration CPU Configuration]. For the at91sam7 example above, the command can look like this; note that
$_ENDIAN is not needed, since OpenOCD defaults to little endian, and this chip doesn’t support changing that.
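For the at91sam7, the target declaration is along these lines:

```tcl
set _TARGETNAME $_CHIPNAME.cpu
target create $_TARGETNAME arm7tdmi -chain-position $_TARGETNAME
```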
Work areas are small RAM areas associated with CPU targets. They are used by OpenOCD to speed up downloads, and to download small snippets of code to program flash chips. If the chip includes a form of “on-chip-ram” - and many do - define a work area if you can. Again using the at91sam7 as an example, this can look like:
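A sketch of such a work-area declaration (the address and size correspond to the at91sam7’s on-chip SRAM):

```tcl
$_TARGETNAME configure -work-area-phys 0x00200000 \
        -work-area-size 0x4000 -work-area-backup 0
```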
6.3.4 Define CPU targets working in SMP
After setting targets, you can define a list of targets working in SMP.
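The example referred to below can be sketched as follows (the DAP name and debug base addresses are illustrative, in the style of a dual Cortex-A8 setup):

```tcl
# two Cortex-A8 cores behind one DAP, then grouped for SMP
set _TARGETNAME_1 $_CHIPNAME.cpu1
set _TARGETNAME_2 $_CHIPNAME.cpu2
target create $_TARGETNAME_1 cortex_a8 -chain-position $_CHIPNAME.dap \
        -coreid 0 -dbgbase 0x80090000
target create $_TARGETNAME_2 cortex_a8 -chain-position $_CHIPNAME.dap \
        -coreid 1 -dbgbase 0x80092000
target smp $_TARGETNAME_2 $_TARGETNAME_1
```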
In the above example on cortex_a8, two CPUs are working in SMP. In SMP mode only one GDB instance is created, and:
- setting a hardware breakpoint sets the same breakpoint on all targets in the list.
- the halt command halts all targets in the list.
- the resume command writes back context and restarts all targets in the list.
- on hitting a breakpoint: the target stopped by the breakpoint is displayed to the GDB session.
- dedicated GDB serial protocol packets are implemented for switching/retrieving the target displayed by the GDB session (see [#Using-openocd-SMP-with-GDB Using openocd SMP with GDB]).
The SMP behaviour can be disabled/enabled dynamically. On cortex_a8 the following commands have been implemented:
- cortex_a8 smp_on : enable SMP mode; behaviour is as described above.
- cortex_a8 smp_off : disable SMP mode; the current target is the one displayed in the GDB session, and only this target is now controlled by the GDB session. This behaviour is useful during system boot up.
- cortex_a8 smp_gdb : display or fix the core id displayed in the GDB session (see the following example).
6.3.5 Chip Reset Setup
As a rule, you should put the
reset_config command into the board file. Most things you think you know about a chip can be tweaked by the board.
Some chips have specific ways the TRST and SRST signals are managed. In the unusual case that these are chip specific and can never be changed by board wiring, they could go here. For example, some chips can’t support JTAG debugging without both signals.
Provide a
reset-assert event handler if you can. Such a handler uses JTAG operations to reset the target, letting this target config be used in systems which don’t provide the optional SRST signal, or on systems where you don’t want to reset all targets at once. Such a handler might write to chip registers to force a reset, use a JRC to do that (preferable – the target may be wedged!), or force a watchdog timer to trigger. (For Cortex-M3 targets, this is not necessary. The target driver knows how to trigger an NVIC reset when SRST is not available.)
Some chips need special attention during reset handling if they’re going to be used with JTAG. An example might be needing to send some commands right after the target’s TAP has been reset, providing a
reset-deassert-post event handler that writes a chip register to report that JTAG debugging is being done. Another would be reconfiguring the watchdog so that it stops counting while the core is halted in the debugger.
JTAG clocking constraints often change during reset, and in some cases target config files (rather than board config files) are the right places to handle some of those issues. For example, immediately after reset most chips run using a slower clock than they will use later. That means that after reset (and potentially, as OpenOCD first starts up) they must use a slower JTAG clock rate than they will use later. See [#JTAG-Speed JTAG Speed].
Important: When you are debugging code that runs right after chip reset, getting these issues right is critical. In particular, if you see intermittent failures when OpenOCD verifies the scan chain after reset, look at how you are setting up JTAG clocking.
6.3.6 The init_targets procedure
Target config files can either be “linear” (script executed line-by-line when parsed in the configuration stage, see [#Configuration-Stage Configuration Stage]) or they can contain a special procedure called
init_targets, which will be executed when entering the run stage (after parsing all config files or after the
init command, see [#Entering-the-Run-Stage Entering the Run Stage]). Such a procedure can be overridden by a “next level” script (which sources the original). This concept facilitates code reuse when basic target config files provide generic configuration procedures and an
init_targets procedure, which can then be sourced and enhanced or changed in a “more specific” target config file. This is not possible with “linear” config scripts, because sourcing them executes every initialization command they provide.
The easiest way to convert “linear” config files to the
init_targets version is to enclose every line of “code” (i.e. not
source commands, procedures, etc.) in this procedure.
For an example of this scheme see LPC2000 target config files.
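A minimal sketch of the conversion (the TAP and target commands are placeholders; note the `::` qualifiers needed to reach globals from inside the proc):

```tcl
# Before: commands executed immediately when the file is sourced.
# After: the same commands, deferred until the run stage.
proc init_targets {} {
    jtag newtap $::_CHIPNAME cpu -irlen 4 -expected-id $::_CPUTAPID
    target create $::_TARGETNAME arm7tdmi -chain-position $::_TARGETNAME
}
```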
The
init_board procedure is a similar concept concerning board config files (See [#The-init_005fboard-procedure The init_board procedure].)
6.3.7 ARM Core Specific Hacks
If the chip has a DCC, enable it. If the chip is an ARM9 with some special high speed download features - enable it.
If present, the MMU, the MPU and the CACHE should be disabled.
Some ARM cores are equipped with trace support, which permits examination of the instruction and data bus activity. Trace activity is controlled through an “Embedded Trace Module” (ETM) on one of the core’s scan chains. The ETM emits voluminous data through a “trace port”. (See [#ARM-Hardware-Tracing ARM Hardware Tracing].) If you are using an external trace port, configure it in your board config file. If you are using an on-chip “Embedded Trace Buffer” (ETB), configure it in your target config file.
6.3.8 Internal Flash Configuration
This applies ONLY TO MICROCONTROLLERS that have flash built in.
Never ever in the “target configuration file” define any type of flash that is external to the chip. (For example a BOOT flash on Chip Select 0.) Such flash information goes in a board file - not the TARGET (chip) file.
Examples:
- at91sam7x256 - has 256K internal flash: YES, enable it.
- str912 - has internal flash: YES, enable it.
- imx27 - uses boot flash on CS0: external - it goes in the board file.
- pxa270 - again, CS0 flash - it goes in the board file.
6.4 Translating Configuration Files
If you have a configuration file for another hardware debugger or toolset (Abatron, BDI2000, BDI3000, CCS, Lauterbach, Segger, Macraigor, etc.), translating it into OpenOCD syntax is often quite straightforward. The trickiest part of creating a configuration script is often the reset init sequence, where e.g. PLLs, DRAM and the like are set up.
One trick that you can use when translating is to write small Tcl procedures to translate the syntax into OpenOCD syntax. This can avoid manual translation errors and make it easier to convert other scripts later on.
Example of transforming quirky arguments to a simple search and replace job:
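A sketch of the idea (the WM32 name mimics a BDI-style “write memory word” command; the mapping is illustrative):

```tcl
# Define a proc named after the foreign command, so that lines
# copied from the old script become valid OpenOCD Tcl.
proc WM32 {addr data} {
    mww $addr $data
}

# Now lines like the following can be pasted unchanged:
# WM32 0xc0400000 0x00000003
```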
7. Daemon Configuration
The commands here are commonly found in the openocd.cfg file and are used to specify what TCP/IP ports are used, and how GDB should be supported.
7.1 Configuration Stage
When the OpenOCD server process starts up, it enters a configuration stage which is the only time that certain commands, configuration commands, may be issued. Normally, configuration commands are only available inside startup scripts.
In this manual, the definition of a configuration command is presented as a Config Command, not as a Command which may be issued interactively. The runtime
help command also highlights configuration commands, and those which may be issued at any time.
Those configuration commands include declaration of TAPs, flash banks, the interface used for JTAG communication, and other basic setup. The server must leave the configuration stage before it may access or activate TAPs. After it leaves this stage, configuration commands may no longer be issued.
7.2 Entering the Run Stage
The first thing OpenOCD does after leaving the configuration stage is to verify that it can talk to the scan chain (list of TAPs) which has been configured. It will warn if it doesn’t find TAPs it expects to find, or finds TAPs that aren’t supposed to be there. You should see no errors at this point. If you see errors, resolve them by correcting the commands you used to configure the server. Common errors include using an initial JTAG speed that’s too fast, and not providing the right IDCODE values for the TAPs on the scan chain.
Once OpenOCD has entered the run stage, a number of commands become available. A number of these relate to the debug targets you may have declared. For example, the
mww command will not be available until a target has been successfully instantiated. If you want to use those commands, you may need to force entry to the run stage.
- Config Command: init
- This command terminates the configuration stage and enters the run stage. This helps when you need to have the startup scripts manage tasks such as resetting the target, programming flash, etc. To reset the CPU upon startup, add "init" and "reset" at the end of the config script or at the end of the OpenOCD command line using the ‘-c’ command line switch.
If this command does not appear in any startup/configuration file OpenOCD executes the command for you after processing all configuration files and/or command line options.
NOTE: This command normally occurs at or near the end of your openocd.cfg file to force OpenOCD to “initialize” and make the targets ready. For example: If your openocd.cfg file needs to read/write memory on your target,
init must occur before the memory read/write commands. This includes
nand probe.
- Overridable Procedure: jtag_init
- This is invoked at server startup to verify that it can talk to the scan chain (list of TAPs) which has been configured.
The default implementation first tries
jtag arp_init, which uses only a lightweight JTAG reset before examining the scan chain. If that fails, it tries again, using a harder reset from the overridable procedure
init_reset.
Implementations must have verified the JTAG scan chain before they return. This is done by calling
jtag arp_init (or
jtag arp_init-reset).
7.3 TCP/IP Ports
The OpenOCD server accepts remote commands in several syntaxes. Each syntax uses a different TCP/IP port, which you may specify only during configuration (before those ports are opened).
For reasons including security, you may wish to prevent remote access using one or more of these ports. In such cases, just specify the relevant port number as zero. If you disable all access through TCP/IP, you will need to use the command line ‘-pipe’ option.
- Command: gdb_port [number]
- Normally GDB listens to a TCP/IP port, but it can also communicate via pipes (stdin/out or named pipes). The name "gdb_port" stuck because it covers probably more than 90% of the normal use cases.
No arguments reports the GDB port. "pipe" means listen to stdin, output to stdout; an integer is the base port number; "disable" disables the gdb server. When using "pipe", also use log_output to redirect the log output to a file so as not to flood the stdin/out pipes. The -p/–pipe option is deprecated and a warning is printed, as it is equivalent to passing in -c "gdb_port pipe; log_output openocd.log". Any other string is interpreted as a named pipe to listen to. The output pipe has the same name as the input pipe, but with ’o’ appended, e.g. /var/gdb, /var/gdbo. The GDB port for the first target will be the base port; the second target will listen on gdb_port + 1, and so on. When not specified during the configuration stage, the port number defaults to 3333.
- Command: tcl_port [number]
- Specify or query the port used for a simplified RPC connection that can be used by clients to issue TCL commands and get the output from the Tcl engine. Intended as a machine interface. When not specified during the configuration stage, the port number defaults to 6666.
- Command: telnet_port [number]
- Specify or query the port on which to listen for incoming telnet connections. This port is intended for interaction with one human through TCL commands. When not specified during the configuration stage, the port number defaults to 4444. When specified as zero, this port is not activated.
7.4 GDB Configuration
You can reconfigure some GDB behaviors if needed. The ones listed here are static and global. See [#Target-Configuration Target Configuration], about configuring individual targets. See [#Target-Events Target Events], about configuring target-specific event handling.
- Command: gdb_breakpoint_override [‘hard’|‘soft’|‘disable’]
- Force the breakpoint type for GDB
break commands. This option supports GDB GUIs which don’t distinguish hard versus soft breakpoints, if the default OpenOCD and GDB behaviour is not sufficient. GDB normally uses hardware breakpoints if the memory map has been set up for flash regions.
- Config Command: gdb_flash_program (‘enable’|‘disable’)
- Set to ‘enable’ to cause OpenOCD to program the flash memory when a vFlash packet is received. The default behaviour is ‘enable’.
- Config Command: gdb_memory_map (‘enable’|‘disable’)
- Set to ‘enable’ to cause OpenOCD to send the memory configuration to GDB when requested. GDB will then know when to set hardware breakpoints, and program flash using the GDB load command.
gdb_flash_program enable must also be set for flash programming to work. Default behaviour is ‘enable’. See [#gdb_005fflash_005fprogram gdb_flash_program].
- Config Command: gdb_report_data_abort (‘enable’|‘disable’)
- Specifies whether data aborts cause an error to be reported by GDB memory read packets. The default behaviour is ‘disable’; use ‘enable’ to see these errors reported.
7.5 Event Polling
Hardware debuggers are parts of asynchronous systems, where significant events can happen at any time. The OpenOCD server needs to detect some of these events, so it can report them through the TCL command line or to GDB.
Examples of such events include:
- One of the targets can stop running ... maybe it triggers a code breakpoint or data watchpoint, or halts itself.
- Messages may be sent over “debug message†channels ... many targets support such messages sent over JTAG, for receipt by the person debugging or tools.
- Loss of power ... some adapters can detect these events.
- Resets not issued through JTAG ... such reset sources can include button presses or other system hardware, sometimes including the target itself (perhaps through a watchdog).
- Debug instrumentation sometimes supports event triggering such as “trace buffer full†(so it can quickly be emptied) or other signals (to correlate with code behavior).
None of those events are signaled through standard JTAG signals. However, most conventions for JTAG connectors include voltage level and system reset (SRST) signal detection. Some connectors also include instrumentation signals, which can imply events when those signals are inputs.
In general, OpenOCD needs to periodically check for those events, either by looking at the status of signals on the JTAG connector or by sending synchronous “tell me your status” JTAG requests to the various active targets. There is a command to manage and monitor that polling, which is normally done in the background.
- Command: poll [‘on’|‘off’]
- Poll the current target for its current state. (Also, see [#target-curstate target curstate].) If that target is in debug mode, architecture specific information about the current state is printed. An optional parameter allows background polling to be enabled and disabled.
You could use this from the TCL command shell, or from GDB using the
monitor poll command. Leave background polling enabled while you’re using GDB.
8. Debug Adapter Configuration
Correctly installing OpenOCD includes making your operating system give OpenOCD access to debug adapters. Once that has been done, Tcl commands are used to select which one is used, and to configure how it is used.
Note: Because OpenOCD started out with a focus purely on JTAG, you may find places where it wrongly presumes JTAG is the only transport protocol in use. Be aware that recent versions of OpenOCD are removing that limitation. JTAG remains more functional than most other transports. Other transports do not support boundary scan operations, or may be specific to a given chip vendor. Some might be usable only for programming flash memory, instead of also for debugging.
Debug Adapters/Interfaces/Dongles are normally configured through commands in an interface configuration file which is sourced by your ‘openocd.cfg’ file, or through a command line ‘-f interface/....cfg’ option.
These commands tell OpenOCD what type of JTAG adapter you have, and how to talk to it. A few cases are so simple that you only need to say what driver to use:
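For example, the J-Link driver needs no further setup, so its interface file is essentially just:

```tcl
# jlink interface
interface jlink
```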
Most adapters need a bit more configuration than that.
8.1 Interface Configuration
The interface command tells OpenOCD what type of debug adapter you are using. Depending on the type of adapter, you may need to use one or more additional commands to further identify or configure the adapter.
- Config Command: interface name
- Use the interface driver name to connect to the target.
- Command: interface_list
- List the debug adapter drivers that have been built into the running copy of OpenOCD.
- Command: interface transports transport_name
- Specifies the transports supported by this debug adapter. The adapter driver builds in similar knowledge; use this only when external configuration (such as jumpering) changes what the hardware can support.
- Command: adapter_name
- Returns the name of the debug adapter driver being used.
8.2 Interface Drivers
Each of the interface drivers listed here must be explicitly enabled when OpenOCD is configured, in order to be made available at run time.
- Interface Driver: amt_jtagaccel
- Amontec Chameleon in its JTAG Accelerator configuration, connected to a PC’s EPP mode parallel port. This defines some driver-specific commands:
- Config Command: parport_port number
- Specifies either the address of the I/O port (default: 0x378 for LPT1) or the number of the ‘/dev/parport’ device.
- Config Command: rtck [‘enable’|‘disable’]
- Displays status of RTCK option. Optionally sets that option first.
- Interface Driver: arm-jtag-ew
- Olimex ARM-JTAG-EW USB adapter This has one driver-specific command:
- Command: armjtagew_info
- Logs some status
- Interface Driver: at91rm9200
- Supports bitbanged JTAG from the local system, presuming that system is an Atmel AT91rm9200 and a specific set of GPIOs is used.
- Interface Driver: dummy
- A dummy software-only driver for debugging.
- Interface Driver: ep93xx
- Cirrus Logic EP93xx based single-board computer bit-banging (in development)
- Interface Driver: ft2232
- FTDI FT2232 (USB) based devices over one of the userspace libraries. These interfaces have several commands, used to configure the driver before initializing the JTAG scan chain:
- Config Command: ft2232_device_desc description
- Provides the USB device description (the iProduct string) of the FTDI FT2232 device. If not specified, the FTDI default value is used. This setting is only valid if compiled with FTD2XX support.
- Config Command: ft2232_serial serial-number
- Specifies the serial-number of the FTDI FT2232 device to use, in case the vendor provides unique IDs and more than one FT2232 device is connected to the host. If not specified, serial numbers are not considered. (Note that USB serial numbers can be arbitrary Unicode strings, and are not restricted to containing only decimal digits.)
- Config Command: ft2232_layout name
- Each vendor’s FT2232 device can use different GPIO signals to control output-enables, reset signals, and LEDs. Currently valid layout name values include:
- - axm0432_jtag Axiom AXM-0432
- - comstick Hitex STR9 comstick
- - cortino Hitex Cortino JTAG interface
- - evb_lm3s811 Luminary Micro EVB_LM3S811 as a JTAG interface, either for the local Cortex-M3 (SRST only) or in a passthrough mode (neither SRST nor TRST) This layout can not support the SWO trace mechanism, and should be used only for older boards (before rev C).
- - luminary_icdi This layout should be used with most Luminary eval boards, including Rev C LM3S811 eval boards and the eponymous ICDI boards, to debug either the local Cortex-M3 or in passthrough mode to debug some other target. It can support the SWO trace mechanism.
- - flyswatter Tin Can Tools Flyswatter
- - icebear ICEbear JTAG adapter from Section 5
- - jtagkey Amontec JTAGkey and JTAGkey-Tiny (and compatibles)
- - jtagkey2 Amontec JTAGkey2 (and compatibles)
- - m5960 American Microsystems M5960
- - olimex-jtag Olimex ARM-USB-OCD and ARM-USB-Tiny
- - oocdlink OOCDLink
- - redbee-econotag Integrated with a Redbee development board.
- - redbee-usb Integrated with a Redbee USB-stick development board.
- - sheevaplug Marvell Sheevaplug development kit
- - signalyzer Xverve Signalyzer
- - stm32stick Hitex STM32 Performance Stick
- - turtelizer2 egnite Software turtelizer2
- - usbjtag "USBJTAG-1" layout described in the OpenOCD diploma thesis
- Config Command: ft2232_vid_pid [vid pid]
- The vendor ID and product ID of the FTDI FT2232 device. If not specified, the FTDI default values are used. Currently, up to eight [vid, pid] pairs may be given, e.g.
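For instance, to match the standard FTDI FT2232 IDs (the pair shown is the FTDI default; add further pairs for rebadged adapters):

```tcl
ft2232_vid_pid 0x0403 0x6010
```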
- Config Command: ft2232_latency ms
- On some systems using FT2232 based JTAG interfaces the FT_Read function call in ft2232_read() fails to return the expected number of bytes. This can be caused by USB communication delays and has proved hard to reproduce and debug. Setting the FT2232 latency timer to a larger value increases delays for short USB packets but it also reduces the risk of timeouts before receiving the expected number of bytes. The OpenOCD default value is 2 and for some systems a value of 10 has proved useful.
For example, the interface config file for a Turtelizer JTAG Adapter looks something like this:
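Roughly:

```tcl
interface ft2232
ft2232_device_desc "Turtelizer JTAG/RS232 Adapter"
ft2232_layout turtelizer2
ft2232_vid_pid 0x0403 0xbdc8
```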
- Interface Driver: remote_bitbang
- Drive JTAG from a remote process. This sets up a UNIX or TCP socket connection with a remote process and sends ASCII encoded bitbang requests to that process instead of directly driving JTAG.
The remote_bitbang driver is useful for debugging software running on processors which are being simulated.
- Config Command: remote_bitbang_port number
- Specifies the TCP port of the remote process to connect to or 0 to use UNIX sockets instead of TCP.
- Config Command: remote_bitbang_host hostname
- Specifies the hostname of the remote process to connect to using TCP, or the name of the UNIX socket to use if remote_bitbang_port is 0.
For example, to connect remotely via TCP to the host foobar you might have something like:
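Something like (the port number is illustrative):

```tcl
interface remote_bitbang
remote_bitbang_port 3335
remote_bitbang_host foobar
```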
To connect to another process running locally via UNIX sockets with socket named mysocket:
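That is, port 0 selects UNIX sockets and the host field names the socket:

```tcl
interface remote_bitbang
remote_bitbang_port 0
remote_bitbang_host mysocket
```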
- Interface Driver: usb_blaster
- USB JTAG/USB-Blaster compatibles over one of the userspace libraries for FTDI chips. These interfaces have several commands, used to configure the driver before initializing the JTAG scan chain:
- Config Command: usb_blaster_device_desc description
- Provides the USB device description (the iProduct string) of the FTDI FT245 device. If not specified, the FTDI default value is used. This setting is only valid if compiled with FTD2XX support.
- Config Command: usb_blaster_vid_pid vid pid
- The vendor ID and product ID of the FTDI FT245 device. If not specified, default values are used. Currently, only one vid, pid pair may be given, e.g. for Altera USB-Blaster (default):
The following VID/PID is for Kolja Waschk’s USB JTAG:
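The two pairs referenced above are:

```tcl
# Altera USB-Blaster (default):
usb_blaster_vid_pid 0x09FB 0x6001
# Kolja Waschk's USB JTAG:
usb_blaster_vid_pid 0x16C0 0x06AD
```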
- Command: usb_blaster (‘pin6’|‘pin8’) (‘0’|‘1’)
- Sets the state of the unused GPIO pins on USB-Blasters (pins 6 and 8 on the female JTAG header). These pins can be used as SRST and/or TRST provided the appropriate connections are made on the target board.
For example, to use pin 6 as SRST (as with an AVR board):
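A sketch of the idea (the polarity and delay are assumptions that depend on your board wiring):

```tcl
# pull pin 6 low to assert SRST, then release it
usb_blaster pin6 0
sleep 100
usb_blaster pin6 1
```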
- Interface Driver: gw16012
- Gateworks GW16012 JTAG programmer. This has one driver-specific command:
- Config Command: parport_port [port_number]
- Display either the address of the I/O port (default: 0x378 for LPT1) or the number of the ‘/dev/parport’ device. If a parameter is provided, first switch to use that port.
- Interface Driver: jlink
- Segger jlink USB adapter
- Interface Driver: parport
- Supports PC parallel port bit-banging cables: Wigglers, PLD download cable, and more. These interfaces have several commands, used to configure the driver before initializing the JTAG scan chain:
- Config Command: parport_cable name
- Set the layout of the parallel port cable used to connect to the target. This is a write-once setting. Currently valid cable name values include:
- - altium Altium Universal JTAG cable.
- - arm-jtag Same as original wiggler except SRST and TRST connections reversed and TRST is also inverted.
- - chameleon The Amontec Chameleon’s CPLD when operated in configuration mode. This is only used to program the Chameleon itself, not a connected target.
- - dlc5 The Xilinx Parallel cable III.
- - flashlink The ST Parallel cable.
- - lattice Lattice ispDOWNLOAD Cable
- - old_amt_wiggler The Wiggler configuration that comes with some versions of Amontec’s Chameleon Programmer. The new version available from the website uses the original Wiggler layout (’wiggler’)
- - triton The parallel port adapter found on the “Karo Triton 1 Development Board”. This is also the layout used by the HollyGates design.
- - wiggler The original Wiggler layout, also supported by several clones, such as the Olimex ARM-JTAG
- - wiggler2 Same as original wiggler except an LED is fitted on D5.
- - wiggler_ntrst_inverted Same as original wiggler except TRST is inverted.
- Config Command: parport_port [port_number]
- Display either the address of the I/O port (default: 0x378 for LPT1) or the number of the ‘/dev/parport’ device. If a parameter is provided, first switch to use that port.
When using PPDEV to access the parallel port, use the number of the parallel port: ‘parport_port 0’ (the default). If ‘parport_port 0x378’ is specified you may encounter a problem.
- Command: parport_toggling_time [nanoseconds]
- Displays how many nanoseconds the hardware needs to toggle TCK; the parport driver uses this value to obey the
adapter_khz configuration. When the optional nanoseconds parameter is given, that setting is changed before displaying the current value.
The default setting should work reasonably well on commodity PC hardware. However, you may want to calibrate for your specific hardware.
Tip: To measure the toggling time with a logic analyzer or a digital storage oscilloscope, follow the procedure below:
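The measurement setup referenced above is, roughly (start from the default toggling time and request a 500 kHz clock):

```tcl
parport_toggling_time 1000
adapter_khz 500
```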
This sets the maximum JTAG clock speed of the hardware, but the actual speed probably deviates from the requested 500 kHz. Now, measure the time between the two closest spaced TCK transitions. You can use
runtest 1000 or something similar to generate a large set of samples. Update the setting to match your measurement:
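For example, if you measured 450 ns per toggle (the number is illustrative):

```tcl
parport_toggling_time 450
```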
Now the clock speed will be a better match for
adapter_khz rate commands given in OpenOCD scripts and event handlers. You can do something similar with many digital multimeters, but note that you’ll probably need to run the clock continuously for several seconds before it decides what clock rate to show. Adjust the toggling time up or down until the measured clock rate is a good match for the adapter_khz rate you specified; be conservative.
- Config Command: parport_write_on_exit (‘on’|‘off’)
- This will configure the parallel driver to write a known cable-specific value to the parallel interface on exiting OpenOCD.
For example, the interface configuration file for a classic “Wiggler†cable on LPT2 might look something like this:
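Something like (LPT2 is conventionally at I/O address 0x278):

```tcl
interface parport
parport_port 0x278
parport_cable wiggler
```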
- Interface Driver: presto
- ASIX PRESTO USB JTAG programmer.
- Config Command: presto_serial serial_string
- Configures the USB serial number of the Presto device to use.
- Interface Driver: rlink
- Raisonance RLink USB adapter
- Interface Driver: usbprog
- usbprog is a freely programmable USB adapter.
- Interface Driver: vsllink
- vsllink is part of Versaloon which is a versatile USB programmer.
Note: This defines quite a few driver-specific commands, which are not currently documented here.
- Interface Driver: stlink
- ST Micro ST-LINK adapter.
- Interface Driver: ZY1000
- This is the Zylin ZY1000 JTAG debugger.
Note: This defines some driver-specific commands, which are not currently documented here.
- Command: power [‘on’|‘off’]
- Turn power switch to target on/off. No arguments: print status.
8.3 Transport Configuration
As noted earlier, depending on the version of OpenOCD you use, and the debug adapter you are using, several transports may be available to communicate with debug targets (or perhaps to program flash memory).
- Command: transport list
- displays the names of the transports supported by this version of OpenOCD.
- Command: transport select transport_name
- Select which of the supported transports to use in this OpenOCD session. The transport must be supported by the debug adapter hardware and by the version of OpenOCD you are using (including the adapter’s driver). No arguments: returns the name of the session’s selected transport.
8.3.1 JTAG Transport
JTAG is the original transport supported by OpenOCD, and most of the OpenOCD commands support it. JTAG transports expose a chain of one or more Test Access Points (TAPs), each of which must be explicitly declared. JTAG supports both debugging and boundary scan testing. Flash programming support is built on top of debug support.
8.3.2 SWD Transport
SWD (Serial Wire Debug) is an ARM-specific transport which exposes one Debug Access Point (DAP), which must be explicitly declared. (SWD uses fewer signal wires than JTAG.) SWD is debug-oriented, and does not support boundary scan testing. Flash programming support is built on top of debug support. (Some processors support both JTAG and SWD.)
- Command: swd newdap ...
- Declares a single DAP which uses SWD transport. Parameters are currently the same as "jtag newtap" but this is expected to change.
- Command: swd wcr trn prescale
- Updates the TRN (turnaround delay) and prescaling fields of the Wire Control Register (WCR). No parameters: displays current settings.
8.3.3 SPI Transport
The Serial Peripheral Interface (SPI) is a general purpose transport which uses four wire signaling. Some processors use it as part of a solution for flash programming.
8.4 JTAG Speed
JTAG clock setup is part of system setup. It does not belong with interface setup, since any interface only knows a few of the constraints for the JTAG clock speed.
The speed used during reset, and the scan chain verification which follows reset, can be adjusted using a
reset-start target event handler. It can then be reconfigured to a faster speed by a
reset-init target event handler after it reprograms those CPU clocks, or manually (if something else, such as a boot loader, sets up those clocks). See [#Target-Events Target Events]. When the initial low JTAG speed is a chip characteristic, perhaps because of a required oscillator speed, provide such a handler in the target config file. When that speed is a function of a board-specific characteristic such as which speed oscillator is used, it belongs in the board config file instead. In both cases it’s safest to also set the initial JTAG clock rate to that same slow speed, so that OpenOCD never starts up using a clock speed that’s faster than the scan chain can support.
If your system supports adaptive clocking (RTCK), configuring JTAG to use that is probably the most robust approach. However, it introduces delays to synchronize clocks; so it may not be the fastest solution.
NOTE: Script writers should consider using
jtag_rclk instead of
adapter_khz, but only for (ARM) cores and boards which support adaptive clocking.
- Command: adapter_khz max_speed_kHz
- A non-zero speed is in kHz; hence 3000 means 3 MHz. JTAG interfaces usually support a limited number of speeds. The speed actually used won’t be faster than the speed specified.
Chip data sheets generally include a top JTAG clock rate. The actual rate is often a function of a CPU core clock, and is normally less than that peak rate. For example, most ARM cores accept at most one sixth of the CPU clock. A speed of 0 (kHz) selects the RTCK method. See [#FAQ-RTCK FAQ RTCK]. If your system uses RTCK, you won’t need to change the JTAG clocking after setup. Not all interfaces, boards, or targets support "rtck". If the interface device can not support it, an error is returned when you try to use RTCK.
- Function: jtag_rclk fallback_speed_kHz
- This Tcl proc (defined in ‘startup.tcl’) attempts to enable RTCK/RCLK. If that fails (maybe the interface, board, or target doesn’t support it), falls back to the specified frequency.
9. Reset Configuration
Every system configuration may require a different reset configuration. This can also be quite confusing. Resets also interact with reset-init event handlers, which do things like setting up clocks and DRAM, and JTAG clock rates. (See [#JTAG-Speed JTAG Speed].) They can also interact with JTAG routers. Please see the various board files for examples.
Note: To maintainers and integrators: Reset configuration touches several things at once. Normally the board configuration file should define it and assume that the JTAG adapter supports everything that’s wired up to the board’s JTAG connector.
However, the target configuration file could also make note of something the silicon vendor has done inside the chip, which will be true for most (or all) boards using that chip. And when the JTAG adapter doesn’t support everything, the user configuration file will need to override parts of the reset configuration provided by other files.
9.1 Types of Reset
There are many kinds of reset possible through JTAG, but they may not all work with a given board and adapter. That’s part of why reset configuration can be error prone.
- System Reset ... the SRST hardware signal resets all chips connected to the JTAG adapter, such as processors, power management chips, and I/O controllers. Normally resets triggered with this signal behave exactly like pressing a RESET button.
- JTAG TAP Reset ... the TRST hardware signal resets just the TAP controllers connected to the JTAG adapter. Such resets should not be visible to the rest of the system; resetting a device’s TAP controller just puts that controller into a known state.
- Emulation Reset ... many devices can be reset through JTAG commands. These resets are often distinguishable from system resets, either explicitly (a "reset reason" register says so) or implicitly (not all parts of the chip get reset).
- Other Resets ... system-on-chip devices often support several other types of reset. You may need to arrange that a watchdog timer stops while debugging, preventing a watchdog reset. There may be individual module resets.
In the best case, OpenOCD can hold SRST, then reset the TAPs via TRST and send commands through JTAG to halt the CPU at the reset vector before the 1st instruction is executed. Then when it finally releases the SRST signal, the system is halted under debugger control before any code has executed. This is the behavior required to support the
reset halt and
reset init commands; after
reset init a board-specific script might do things like setting up DRAM. (See [#Reset-Command Reset Command].)
9.2 SRST and TRST Issues
Because SRST and TRST are hardware signals, they can have a variety of system-specific constraints. Some of the most common issues are:
- Signal not available ... Some boards don’t wire SRST or TRST to the JTAG connector. Some JTAG adapters don’t support such signals even if they are wired up. Use the
reset_config signals options to say when either of those signals is not connected. When SRST is not available, your code might not be able to rely on controllers having been fully reset during code startup. Missing TRST is not a problem, since JTAG-level resets can be triggered using TMS signaling.
- Signals shorted ... Sometimes a chip, board, or adapter will connect SRST to TRST, instead of keeping them separate. Use the
reset_config combination options to say when those signals aren’t properly independent.
- Timing ... Reset circuitry like a resistor/capacitor delay circuit, reset supervisor, or on-chip features can extend the effect of a JTAG adapter’s reset for some time after the adapter stops issuing the reset. For example, there may be chip or board requirements that all reset pulses last for at least a certain amount of time; and reset buttons commonly have hardware debouncing. Use the
adapter_nsrst_delay and
jtag_ntrst_delay commands to say when extra delays are needed.
- Drive type ... Reset lines often have a pullup resistor, letting the JTAG interface treat them as open-drain signals. But that’s not a requirement, so the adapter may need to use push/pull output drivers. Also, with weak pullups it may be advisable to drive signals to both levels (push/pull) to minimize rise times. Use the
reset_config trst_type and srst_type parameters to say how to drive reset signals.
- Special initialization ... Targets sometimes need special JTAG initialization sequences to handle chip-specific issues (not limited to errata). For example, certain JTAG commands might need to be issued while the system as a whole is in a reset state (SRST active) but the JTAG scan chain is usable (TRST inactive). Many systems treat combined assertion of SRST and TRST as a trigger for a harder reset than SRST alone. Such custom reset handling is discussed later in this chapter.
There can also be other issues. Some devices don’t fully conform to the JTAG specifications. Trivial system-specific differences are common, such as SRST and TRST using slightly different names. There are also vendors who distribute key JTAG documentation for their chips only to developers who have signed a Non-Disclosure Agreement (NDA).
Sometimes there are chip-specific extensions like a requirement to use the normally-optional TRST signal (precluding use of JTAG adapters which don’t pass TRST through), or needing extra steps to complete a TAP reset.
In short, SRST and especially TRST handling may be very finicky, needing to cope with both architecture and board specific constraints.
9.3 Commands for Handling Resets
- Command: adapter_nsrst_assert_width milliseconds
- Minimum amount of time (in milliseconds) OpenOCD should wait after asserting nSRST (active-low system reset) before allowing it to be deasserted.
- Command: adapter_nsrst_delay milliseconds
- How long (in milliseconds) OpenOCD should wait after deasserting nSRST (active-low system reset) before starting new JTAG operations. When a board has a reset button connected to the SRST line it will probably have hardware debouncing, implying you should use this.
- Command: jtag_ntrst_assert_width milliseconds
- Minimum amount of time (in milliseconds) OpenOCD should wait after asserting nTRST (active-low JTAG TAP reset) before allowing it to be deasserted.
- Command: jtag_ntrst_delay milliseconds
- How long (in milliseconds) OpenOCD should wait after deasserting nTRST (active-low JTAG TAP reset) before starting new JTAG operations.
- Command: reset_config mode_flag ...
- This command displays or modifies the reset configuration of your combination of JTAG board and target in target configuration scripts.
Information earlier in this section describes the kind of problems the command is intended to address (see [#SRST-and-TRST-Issues SRST and TRST Issues]). As a rule this command belongs only in board config files, describing issues like board doesn’t connect TRST; or in user config files, addressing limitations derived from a particular combination of interface and board. (An unlikely example would be using a TRST-only adapter with a board that only wires up SRST.) The mode_flag options can be specified in any order, but only one of each type – signals, combination, gates, trst_type, and srst_type – may be specified at a time. If you don’t provide a new value for a given type, its previous value (perhaps the default) is unchanged. For example, this means that you don’t need to say anything at all about TRST just to declare that if the JTAG adapter should want to drive SRST, it must explicitly be driven high (‘srst_push_pull’).
- signals can specify which of the reset signals are connected. For example, if the JTAG interface provides SRST, but the board doesn’t connect that signal properly, then OpenOCD can’t use it. Possible values are ‘none’ (the default), ‘trst_only’, ‘srst_only’ and ‘trst_and_srst’.
Tip: If your board provides SRST and/or TRST through the JTAG connector, you must declare that so those signals can be used.
- The combination is an optional value specifying broken reset signal implementations. The default behaviour if no option given is ‘separate’, indicating everything behaves normally. ‘srst_pulls_trst’ states that the test logic is reset together with the reset of the system (e.g. NXP LPC2000, "broken" board layout), ‘trst_pulls_srst’ says that the system is reset together with the test logic (only hypothetical, I haven’t seen hardware with such a bug, and can be worked around). ‘combined’ implies both ‘srst_pulls_trst’ and ‘trst_pulls_srst’.
- The gates tokens control flags that describe some cases where JTAG may be unavailable during reset. ‘srst_gates_jtag’ (default) indicates that asserting SRST gates the JTAG clock. This means that no communication can happen on JTAG while SRST is asserted. Its converse is ‘srst_nogate’, indicating that JTAG commands can safely be issued while SRST is active.
The optional trst_type and srst_type parameters allow the driver mode of each reset line to be specified. These values only affect JTAG interfaces with support for different driver modes, like the Amontec JTAGkey and JTAG Accelerator. Also, they are necessarily ignored if the relevant signal (TRST or SRST) is not connected.
- Possible trst_type driver modes for the test reset signal (TRST) are the default ‘trst_push_pull’, and ‘trst_open_drain’. Most boards connect this signal to a pulldown, so the JTAG TAPs never leave reset unless they are hooked up to a JTAG adapter.
- Possible srst_type driver modes for the system reset signal (SRST) are the default ‘srst_open_drain’, and ‘srst_push_pull’. Most boards connect this signal to a pullup, and allow the signal to be pulled low by various events including system powerup and pressing a reset button.
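Putting those flag types together, a board config file for hardware that wires up both reset signals might contain something like the following sketch (the flag choices are illustrative; use the ones matching your board and chip):

```tcl
# Both TRST and SRST are wired and properly independent; on this chip,
# asserting SRST gates the JTAG clock. TRST is push/pull, SRST open-drain.
reset_config trst_and_srst separate srst_gates_jtag trst_push_pull srst_open_drain
```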
9.4 Custom Reset Handling
OpenOCD has several ways to help support the various reset mechanisms provided by chip and board vendors. The commands shown in the previous section give standard parameters. There are also event handlers associated with TAPs or Targets. Those handlers are Tcl procedures you can provide, which are invoked at particular points in the reset sequence.
When SRST is not an option you must set up a
reset-assert event handler for your target. For example, some JTAG adapters don’t include the SRST signal; and some boards have multiple targets, and you won’t always want to reset everything at once.
After configuring those mechanisms, you might still find your board doesn’t start up or reset correctly. For example, maybe it needs a slightly different sequence of SRST and/or TRST manipulations, because of quirks that the
reset_config mechanism doesn’t address; or asserting both might trigger a stronger reset, which needs special attention.
Experiment with lower level operations, such as
jtag_reset and the
jtag arp_* operations shown here, to find a sequence of operations that works. See section [#JTAG-Commands JTAG Commands]. When you find a working sequence, it can be used to override
jtag_init, which fires during OpenOCD startup (see [#Configuration-Stage Configuration Stage]); or
init_reset, which fires during reset processing.
You might also want to provide some project-specific reset schemes. For example, on a multi-target board the standard
reset command would reset all targets, but you may need the ability to reset only one target at time and thus want to avoid using the board-wide SRST signal.
- Overridable Procedure: init_reset mode
- This is invoked near the beginning of the
reset command, usually to provide as much of a cold (power-up) reset as practical. By default it is also invoked from
jtag_init if the scan chain does not respond to pure JTAG operations. The mode parameter is the parameter given to the low level reset command (‘halt’, ‘init’, or ‘run’), ‘setup’, or potentially some other value.
The default implementation just invokes
jtag arp_init-reset. Replacements will normally build on low level JTAG operations such as
jtag_reset. Operations here must not address individual TAPs (or their associated targets) until the JTAG scan chain has first been verified to work.
Implementations must have verified the JTAG scan chain before they return. This is done by calling
jtag arp_init (or
jtag arp_init-reset).
- Command: jtag arp_init
- This validates the scan chain using just the four standard JTAG signals (TMS, TCK, TDI, TDO). It starts by issuing a JTAG-only reset. Then it performs checks to verify that the scan chain configuration matches the TAPs it can observe. Those checks include checking IDCODE values for each active TAP, and verifying the length of their instruction registers using TAP
-ircapture and
-irmask values. If these tests all pass, TAP
setup events are issued to all TAPs with handlers for that event.
- Command: jtag arp_init-reset
- This uses TRST and SRST to try resetting everything on the JTAG scan chain (and anything else connected to SRST). It then invokes the logic of
jtag arp_init.
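A hypothetical init_reset override illustrating the pattern described above (the SRST pulse length is invented; jtag_reset takes the TRST level followed by the SRST level):

```tcl
# Sketch of a custom reset: pulse SRST with TRST held inactive, then
# verify the scan chain before returning, as the rules above require.
proc init_reset { mode } {
    jtag_reset 0 1   ;# assert SRST only (TRST inactive)
    sleep 10         ;# hypothetical settling time, in milliseconds
    jtag_reset 0 0   ;# release SRST
    jtag arp_init    ;# validate the scan chain
}
```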
10. TAP Declaration
Test Access Ports (TAPs) are the core of JTAG. TAPs serve many roles, including:
- Debug Target A CPU TAP can be used as a GDB debug target
- Flash Programming Some chips program the flash directly via JTAG. Others do it indirectly, making a CPU do it.
- Program Download Using the same CPU support GDB uses, you can initialize a DRAM controller, download code to DRAM, and then start running that code.
- Boundary Scan Most chips support boundary scan, which helps test for board assembly problems like solder bridges and missing connections
OpenOCD must know about the active TAPs on your board(s). Setting up the TAPs is the core task of your configuration files. Once those TAPs are set up, you can pass their names to code which sets up CPUs and exports them as GDB targets, probes flash memory, performs low-level JTAG operations, and more.
10.1 Scan Chains
TAPs are part of a hardware scan chain, which is a daisy chain of TAPs. They also need to be added to OpenOCD’s software mirror of that hardware list, giving each member a name and associating other data with it. Simple scan chains, with a single TAP, are common in systems with a single microcontroller or microprocessor. More complex chips may have several TAPs internally. Very complex scan chains might have a dozen or more TAPs: several in one chip, more in the next, and connecting to other boards with their own chips and TAPs.
You can display the list with the
scan_chain command. (Don’t confuse this with the list displayed by the
targets command, presented in the next chapter. That only displays TAPs for CPUs which are configured as debugging targets.) Here’s what the scan chain might look like for a chip with more than one TAP:
   TapName            Enabled IdCode     Expected   IrLen IrCap IrMask
-- ------------------ ------- ---------- ---------- ----- ----- ------
 0 omap5912.dsp          Y    0x03df1d81 0x03df1d81    38 0x01  0x03
 1 omap5912.arm          Y    0x0692602f 0x0692602f     4 0x01  0x0f
 2 omap5912.unknown      Y    0x00000000 0x00000000     8 0x01  0x03
OpenOCD can detect some of that information, but not all of it. See [#Autoprobing Autoprobing]. Unfortunately those TAPs can’t always be autoconfigured, because not all devices provide good support for that. JTAG doesn’t require supporting IDCODE instructions, and chips with JTAG routers may not link TAPs into the chain until they are told to do so.
The configuration mechanism currently supported by OpenOCD requires explicit configuration of all TAP devices using
jtag newtap commands, as detailed later in this chapter. A command like this would declare one tap and name it
chip1.cpu:
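A minimal declaration along those lines might look like the following sketch (the -irlen and -expected-id values are illustrative, not taken from any particular chip):

```tcl
# Declare one TAP with dotted name chip1.cpu: 4-bit instruction
# register, and the 32-bit IDCODE we expect the chain scan to find.
jtag newtap chip1 cpu -irlen 4 -expected-id 0x12345677
```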
Each target configuration file lists the TAPs provided by a given chip. Board configuration files combine all the targets on a board, and so forth. Note that the order in which TAPs are declared is very important. It must match the order in the JTAG scan chain, both inside a single chip and between them. See [#FAQ-TAP-Order FAQ TAP Order].
For example, the ST Microsystems STR912 chip has three separate TAPs[#FOOT5 (5)]. To configure those taps, ‘target/str912.cfg’ includes commands something like this:
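The declarations follow this shape, one per TAP in scan-chain order; the configparams shown here are placeholders rather than the real STR912 values:

```tcl
# Three TAPs in one chip: flash controller, CPU, and boundary scan.
# IR lengths are illustrative; consult the chip documentation.
jtag newtap str912 flash -irlen 8
jtag newtap str912 cpu   -irlen 4
jtag newtap str912 bs    -irlen 5
```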
Actual config files use a variable instead of literals like ‘str912’, to support more than one chip of each type. See section [#Config-File-Guidelines Config File Guidelines].
- Command: jtag names
- Returns the names of all current TAPs in the scan chain. Use
jtag cget or
jtag tapisenabled to examine attributes and state of each TAP.
- Command: scan_chain
- Displays the TAPs in the scan chain configuration, and their status. The set of TAPs listed by this command is fixed by exiting the OpenOCD configuration stage, but systems with a JTAG router can enable or disable TAPs dynamically.
10.2 TAP Names
When TAP objects are declared with
jtag newtap, a dotted.name is created for the TAP, combining the name of a module (usually a chip) and a label for the TAP. For example:
xilinx.tap,
str912.flash,
omap3530.jrc,
dm6446.dsp, or
stm32.cpu. Many other commands use that dotted.name to manipulate or refer to the TAP. For example, CPU configuration uses the name, as does declaration of NAND or NOR flash banks.
The components of a dotted name should follow "C" symbol name rules: start with an alphabetic character; after that, numbers and underscores are OK, while other characters (including dots!) are not.
Tip: In older code, JTAG TAPs were numbered from 0..N. This feature is still present. However its use is highly discouraged, and should not be relied on; it will be removed by mid-2010. Update all of your scripts to use TAP names rather than numbers, by paying attention to the runtime warnings they trigger. Using TAP numbers in target configuration scripts prevents reusing those scripts on boards with multiple targets.
10.3 TAP Declaration Commands
- Command: jtag newtap chipname tapname configparams...
- Declares a new TAP with the dotted name chipname.tapname, and configured according to the various configparams.
The chipname is a symbolic name for the chip. Conventionally target config files use
$_CHIPNAME, defaulting to the model name given by the chip vendor but overridable.
The tapname reflects the role of that TAP, and should follow this convention:
bs – For boundary scan, if this is a separate TAP;
cpu – The main CPU of the chip, alternatively
arm and
dsp on chips with both ARM and DSP CPUs,
arm1 and
arm2 on chips with two ARMs, and so forth;
etb – For an embedded trace buffer (example: an ARM ETB11);
flash – If the chip has a flash TAP, like the str912;
jrc – For a JTAG route controller (example: the ICEpick modules on many Texas Instruments chips, like the OMAP3530 on Beagleboards);
tap – Should be used only for FPGA- or CPLD-like devices with a single TAP;
unknownN – If you have no idea what the TAP is for (N is a number);
- when in doubt – Use the chip maker’s name from the data sheet. For example, the Freescale IMX31 has an SDMA (Smart DMA) engine with a JTAG TAP; that TAP should be named
sdma.
Every TAP requires at least the following configparams:
-irlen NUMBER
The length in bits of the instruction register, such as 4 or 5 bits.
A TAP may also provide optional configparams:
-disable (or
-enable)
Use the
-disable parameter to flag a TAP which is not linked into the scan chain after a reset using either TRST or the JTAG state machine’s RESET state. You may use
-enable to highlight the default state (the TAP is linked in). See [#Enabling-and-Disabling-TAPs Enabling and Disabling TAPs].
-expected-idnumber
A non-zero number represents a 32-bit IDCODE which you expect to find when the scan chain is examined. These codes are not required by all JTAG devices. Repeat the option as many times as required if more than one ID code could appear (for example, multiple versions). Specify number as zero to suppress warnings about IDCODE values that were found but not included in the list.
Provide this value if at all possible, since it lets OpenOCD tell when the scan chain it sees isn’t right. These values are provided in vendors’ chip documentation, usually a technical reference manual. Sometimes you may need to probe the JTAG hardware to find these values. See [#Autoprobing Autoprobing].
-ignore-version
Specify this to ignore the JTAG version field in the
-expected-id option. When vendors put out multiple versions of a chip, or use the same JTAG-level ID for several largely-compatible chips, it may be more practical to ignore the version field than to update config files to handle all of the various chip IDs. The version field is defined as bits 28-31 of the IDCODE.
-ircapture NUMBER
The bit pattern loaded by the TAP into the JTAG shift register on entry to the IRCAPTURE state, such as 0x01. JTAG requires the two LSBs of this value to be 01. By default,
-ircapture and
-irmask are set up to verify that two-bit value. You may provide additional bits, if you know them, or indicate that a TAP doesn’t conform to the JTAG specification.
-irmask NUMBER
A mask used with
-ircapture to verify that instruction scans work correctly. Such scans are not used by OpenOCD except to verify that there seem to be no problems with JTAG scan chain operations.
10.4 Other TAP commands
- Command: jtag cget dotted.name ‘-event’ name
- Command: jtag configure dotted.name ‘-event’ name string
- At this writing this TAP attribute mechanism is used only for event handling. (It is not a direct analogue of the
cget/
configure mechanism for debugger targets.) See the next section for information about the available events.
The
configure subcommand assigns an event handler, a TCL string which is evaluated when the event is triggered. The
cget subcommand returns that handler.
10.5 TAP Events
OpenOCD includes two event mechanisms. The one presented here applies to all JTAG TAPs. The other applies to debugger targets, which are associated with certain TAPs.
The TAP events currently defined are:
- post-reset
The TAP has just completed a JTAG reset. The tap may still be in the JTAG RESET state. Handlers for these events might perform initialization sequences such as issuing TCK cycles, TMS sequences to ensure exit from the ARM SWD mode, and more.
Because the scan chain has not yet been verified, handlers for these events should not issue commands which scan the JTAG IR or DR registers of any particular target. NOTE: As this is written (September 2009), nothing prevents such access.
- setup
The scan chain has been reset and verified. This handler may enable TAPs as needed.
- tap-disable
The TAP needs to be disabled. This handler should implement
jtag tapdisable by issuing the relevant JTAG commands.
- tap-enable
The TAP needs to be enabled. This handler should implement
jtag tapenable by issuing the relevant JTAG commands.
If you need some action after each JTAG reset, which isn’t actually specific to any TAP (since you can’t yet trust the scan chain’s contents to be accurate), you might:
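...attach a post-reset handler to one of your TAPs, along these lines (the tap name, message, and cycle count are illustrative):

```tcl
# Runs after every JTAG reset, before the scan chain is verified,
# so it must not scan any TAP's IR or DR registers.
jtag configure chip1.cpu -event post-reset {
    echo "JTAG reset done"
    runtest 100   ;# e.g. issue some TCK cycles in Run-Test/Idle
}
```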
10.6 Enabling and Disabling TAPs
In some systems, a JTAG Route Controller (JRC) is used to enable and/or disable specific JTAG TAPs. Many ARM based chips from Texas Instruments include an "ICEpick" module, which is a JRC. Such chips include DaVinci and OMAP3 processors.
A given TAP may not be visible until the JRC has been told to link it into the scan chain; and if the JRC has been told to unlink that TAP, it will no longer be visible. Such routers address problems that JTAG "bypass mode" ignores, such as:
- The scan chain can only go as fast as its slowest TAP.
- Having many TAPs slows instruction scans, since all TAPs receive new instructions.
- TAPs in the scan chain must be powered up, which wastes power and prevents debugging some power management mechanisms.
The IEEE 1149.1 JTAG standard has no concept of a "disabled" tap, as implied by the existence of JTAG routers. However, the upcoming IEEE 1149.7 framework (layered on top of JTAG) does include a kind of JTAG router functionality.
In OpenOCD, tap enabling/disabling is invoked by the Tcl commands shown below, and is implemented using TAP event handlers. So for example, when defining a TAP for a CPU connected to a JTAG router, your ‘target.cfg’ file should define TAP event handlers using code that looks something like this:
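A sketch of such handlers follows; the chip name variable and the comments standing in for JRC manipulation are placeholders, since the actual linking commands depend on the router:

```tcl
# Declare the CPU TAP as initially unlinked from the scan chain.
jtag newtap $CHIP cpu -irlen 4 -disable

jtag configure $CHIP.cpu -event tap-enable {
    # ... JTAG operations using the $CHIP.jrc router to link this TAP in
}
jtag configure $CHIP.cpu -event tap-disable {
    # ... JTAG operations using the $CHIP.jrc router to unlink it
}
```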
Then you might want that CPU’s TAP enabled almost all the time:
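That can be done from a setup event handler on the router's own TAP, for example (chip and tap names are placeholders):

```tcl
# Quotes make $CHIP expand now, when the handler is declared.
jtag configure $CHIP.jrc -event setup "jtag tapenable $CHIP.cpu"
```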
Note how that particular setup event handler declaration uses quotes to evaluate
$CHIP when the event is configured. Using brackets { } would cause it to be evaluated later, at runtime, when it might have a different value.
- Command: jtag tapdisable dotted.name
- If necessary, disables the tap by sending it a ‘tap-disable’ event. Returns the string "1" if the tap specified by dotted.name is enabled, and "0" if it is disabled.
- Command: jtag tapenable dotted.name
- If necessary, enables the tap by sending it a ‘tap-enable’ event. Returns the string "1" if the tap specified by dotted.name is enabled, and "0" if it is disabled.
- Command: jtag tapisenabled dotted.name
- Returns the string "1" if the tap specified by dotted.name is enabled, and "0" if it is disabled.
Note: Humans will find the
scan_chain command more helpful for querying the state of the JTAG taps.
10.7 Autoprobing
TAP configuration is the first thing that needs to be done after interface and reset configuration. Sometimes it’s hard finding out what TAPs exist, or how they are identified. Vendor documentation is not always easy to find and use.
To help you get past such problems, OpenOCD has a limited autoprobing ability to look at the scan chain, doing a blind interrogation and then reporting the TAPs it finds. To use this mechanism, start the OpenOCD server with only data that configures your JTAG interface, and arranges to come up with a slow clock (many devices don’t support fast JTAG clocks right when they come out of reset).
For example, your ‘openocd.cfg’ file might have:
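For instance, something like the following sketch (the interface config file name is a placeholder for whichever adapter you actually use):

```tcl
# Configure only the JTAG adapter; declare no TAPs, and clock slowly.
source [find interface/olimex-arm-usb-tiny-h.cfg]
reset_config trst_and_srst
jtag_rclk 8
```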
When you start the server without any TAPs configured, it will attempt to autoconfigure the TAPs. There are two parts to this:
- TAP discovery ... After a JTAG reset (sometimes a system reset may be needed too), each TAP’s data registers will hold the contents of either the IDCODE or BYPASS register. If JTAG communication is working, OpenOCD will see each TAP, and report what ‘-expected-id’ to use with it.
- IR Length discovery ... Unfortunately JTAG does not provide a reliable way to find out the value of the ‘-irlen’ parameter to use with a TAP that is discovered. If OpenOCD can discover the length of a TAP’s instruction register, it will report it. Otherwise you may need to consult vendor documentation, such as chip data sheets or BSDL files.
In many cases your board will have a simple scan chain with just a single device. Here’s what OpenOCD reported with one board that’s a bit more complex:
Given that information, you should be able to either find some existing config files to use, or create your own. If you create your own, you would configure from the bottom up: first a ‘target.cfg’ file with these TAPs, any targets associated with them, and any on-chip resources; then a ‘board.cfg’ with off-chip resources, clocking, and so forth.
11. CPU Configuration
This chapter discusses how to set up GDB debug targets for CPUs. You can also access these targets without GDB (see section [#Architecture-and-Core-Commands Architecture and Core Commands], and [#Target-State-handling Target State handling]) and through various kinds of NAND and NOR flash commands. If you have multiple CPUs you can have multiple such targets.
We’ll start by looking at how to examine the targets you have, then look at how to add one more target and how to configure it.
11.1 Target List
All targets that have been set up are part of a list, where each member has a name. That name should normally be the same as the TAP name. You can display the list with the
targets (plural!) command. This display often has only one CPU; here’s what it might look like with more than one:
   TargetName         Type       Endian TapName            State
-- ------------------ ---------- ------ ------------------ ------------
 0* at91rm9200.cpu    arm920t    little at91rm9200.cpu     running
 1  MyTarget          cortex_m3  little mychip.foo         tap-disabled
One member of that list is the current target, which is implicitly referenced by many commands. It’s the one marked with a
* near the target name. In particular, memory addresses often refer to the address space seen by that current target. Commands like
mdw (memory display words) and
flash erase_address (erase NOR flash blocks) are examples; and there are many more.
Several commands let you examine the list of targets:
- Command: target count
- Note: target numbers are deprecated; don’t use them. They will be removed shortly after August 2010, including this command. Iterate over targets using
target names, not by counting.
Returns the number of targets, N. The highest numbered target is N - 1.
- Command: target current
- Returns the name of the current target.
- Command: target names
- Lists the names of all current targets in the list.
- Command: target number number
- Note: target numbers are deprecated; don’t use them. They will be removed shortly after August 2010, including this command.
The list of targets is numbered starting at zero. This command returns the name of the target at index number.
- Command: targets [name]
- Note: the name of this command is plural. Other target command names are singular.
With no parameter, this command displays a table of all known targets in a user friendly form. With a parameter, this command sets the current target to the given target with the given name; this is only relevant on boards which have more than one target.
11.2 Target CPU Types and Variants
Each target has a CPU type, as shown in the output of the
targets command. You need to specify that type when calling
target create. The CPU type indicates more than just the instruction set. It also indicates how that instruction set is implemented, what kind of debug support it integrates, whether it has an MMU (and if so, what kind), what core-specific commands may be available (see section [#Architecture-and-Core-Commands Architecture and Core Commands]), and more.
For some CPU types, OpenOCD also defines variants which indicate differences that affect their handling. For example, a particular implementation bug might need to be worked around in some chip versions.
It’s easy to see what target types are supported, since there’s a command to list them. However, there is currently no way to list what target variants are supported (other than by reading the OpenOCD source code).
- Command: target types
- Lists all supported target types. At this writing, the supported CPU types and variants are:
arm11 – this is a generation of ARMv6 cores
arm720t – this is an ARMv4 core with an MMU
arm7tdmi – this is an ARMv4 core
arm920t – this is an ARMv4 core with an MMU
arm926ejs – this is an ARMv5 core with an MMU
arm966e – this is an ARMv5 core
arm9tdmi – this is an ARMv4 core
avr – implements Atmel’s 8-bit AVR instruction set. (Support for this is preliminary and incomplete.)
cortex_a8 – this is an ARMv7 core with an MMU
cortex_m3 – this is an ARMv7 core, supporting only the compact Thumb2 instruction set.
dragonite – resembles arm966e
dsp563xx – implements Freescale’s 24-bit DSP. (Support for this is still incomplete.)
fa526 – resembles arm920 (w/o Thumb)
feroceon – resembles arm926
mips_m4k – a MIPS core. This supports one variant:
xscale – this is actually an architecture, not a CPU type. It is based on the ARMv5 architecture. There are several variants defined:
- -
ixp42x,
ixp45x,
ixp46x,
pxa27x... instruction register length is 7 bits
- -
pxa250,
pxa255,
pxa26x... instruction register length is 5 bits
- -
pxa3xx... instruction register length is 11 bits
To avoid being confused by the variety of ARM based cores, remember this key point: ARM is a technology licensing company. The CPU name used by OpenOCD will reflect the CPU design that was licensed, not a vendor brand which incorporates that design. Name prefixes like arm7, arm9, arm11, and cortex reflect design generations; while names like ARMv4, ARMv5, ARMv6, and ARMv7 reflect an architecture version implemented by a CPU design.
11.3 Target Configuration
Before creating a “target”, you must have added its TAP to the scan chain. When you’ve added that TAP, you will have a
dotted.name which is used to set up the CPU support. The chip-specific configuration file will normally configure its CPU(s) right after it adds all of the chip’s TAPs to the scan chain.
Although you can set up a target in one step, it’s often clearer if you use shorter commands and do it in two steps: create it, then configure optional parts. All operations on the target after it’s created will use a new command, created as part of target creation.
The two main things to configure after target creation are a work area, which usually has target-specific defaults even if the board setup code overrides them later; and event handlers (see [#Target-Events Target Events]), which tend to be much more board-specific. The key steps you use might look something like this
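A minimal sketch of those two steps, assuming a hypothetical chip whose single TAP is named mychip.cpu (the CPU type, addresses, and handler body below are illustrative, not taken from any real board file):

```tcl
# Step 1: create the target, binding it to its TAP
target create mychip.cpu arm926ejs -chain-position mychip.cpu

# Step 2: configure optional parts afterwards, using the new command
mychip.cpu configure -work-area-phys 0x00200000 -work-area-size 0x4000
mychip.cpu configure -event reset-init { echo "reset-init on mychip" }
```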
You should specify a working area if you can; typically it uses some on-chip SRAM. Such a working area can speed up many things, including bulk writes to target memory; flash operations like checking to see if memory needs to be erased; GDB memory checksumming; and more.
Warning: On more complex chips, the work area can become inaccessible when application code (such as an operating system) enables or disables the MMU. For example, the particular MMU context used to access the virtual address will probably matter ... and that context might not have easy access to other addresses needed. At this writing, OpenOCD doesn’t have much MMU intelligence.
It’s often very useful to define a
reset-init event handler. For systems that are normally used with a boot loader, common tasks include updating clocks and initializing memory controllers. That may be needed to let you write the boot loader into flash, in order to “de-brick” your board; or to load programs into external DDR memory without having run the boot loader.
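As a hedged illustration, such a handler might look like the sketch below; every register address and value here is made up for a hypothetical board, and must be replaced by values from your chip’s documentation:

```tcl
proc my_board_reset_init {} {
    # bring up the PLL (hypothetical clock-controller register)
    mww 0xfffff030 0x00000005
    # configure the SDRAM controller (hypothetical registers/values)
    mww 0xffffe800 0x00000212
    # once target clocks are up, a faster JTAG clock is safe
    adapter_khz 6000
}
$_TARGETNAME configure -event reset-init my_board_reset_init
```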
- Command: target create target_name type configparams...
- This command creates a GDB debug target that refers to a specific JTAG tap. It enters that target into a list, and creates a new command (
target_name) which is used for various purposes including additional configuration.
- target_name ... is the name of the debug target. By convention this should be the same as the dotted.name of the TAP associated with this target, which must be specified here using the
-chain-position dotted.nameconfigparam.
This name is also used to create the target object command, referred to here as
$target_name, and in other places the target needs to be identified.
- type ... specifies the target type. See [#target-types target types].
- configparams ... all parameters accepted by
$target_name configure are permitted. If the target is big-endian, set it here with
-endian big. If the variant matters, set it here with
-variant.
You must set the
-chain-position dotted.name here.
- Command: $target_name configure configparams...
- The options accepted by this command may also be specified as parameters to
target create. Their values can later be queried one at a time by using the
$target_name cget command.
Warning: changing some of these after setup is dangerous. For example, moving a target from one TAP to another; and changing its endianness or variant.
-chain-position dotted.name – names the TAP used to access this target.
-endian (‘big’|‘little’) – specifies whether the CPU uses big or little endian conventions
-event event_name event_body – See [#Target-Events Target Events]. Note that this updates a list of named event handlers. Calling this twice with two different event names assigns two different handlers, but calling it twice with the same event name assigns only one handler.
-variant name – specifies a variant of the target, which OpenOCD needs to know about.
-work-area-backup (‘0’|‘1’) – says whether the work area gets backed up; by default, it is not backed up. When possible, use a working area that doesn’t need to be backed up, since performing a backup slows down operations. For example, the beginning of an SRAM block is likely to be used by most build systems, but the end is often unused.
-work-area-size size – specifies the work area size, in bytes. The same size applies regardless of whether its physical or virtual address is being used.
-work-area-phys address – sets the work area base address to be used when no MMU is active.
-work-area-virt address – sets the work area base address to be used when an MMU is active. Do not specify a value for this except on targets with an MMU. The value should normally correspond to a static mapping for the
-work-area-phys address, set up by the current operating system.
11.4 Other $target_name Commands
The Tcl/Tk language has the concept of object commands, and OpenOCD adopts that same model for targets.
A good Tk example is an on-screen button. Once a button is created, it has a name (a path, in Tk terms), and that name is usable as a first-class command. For example, in Tk one can create a button and later configure it like this:
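In plain Tcl/Tk that pattern looks something like this (the widget path and options are just an example):

```tcl
# creating the widget also creates a command named after its path
button .b -text "Hello" -command { puts "pressed" }
pack .b
# later, the widget is reconfigured through that same command
.b configure -text "Goodbye"
```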
In OpenOCD’s terms, the “target†is an object just like a Tcl/Tk button, and its object commands are invoked the same way.
The commands supported by OpenOCD target objects are:
- Command: $target_name arp_examine
- Command: $target_name arp_halt
- Command: $target_name arp_poll
- Command: $target_name arp_reset
- Command: $target_name arp_waitstate
- Internal OpenOCD scripts (most notably ‘startup.tcl’) use these to deal with specific reset cases. They are not otherwise documented here.
- Command: $target_name array2mem arrayname width address count
- Command: $target_name mem2array arrayname width address count
- These provide an efficient script-oriented interface to memory. The
array2mem primitive writes bytes, halfwords, or words; while
mem2array reads them. In both cases, the TCL side uses an array, and the target side uses raw memory.
The efficiency comes from enabling the use of bulk JTAG data transfer operations. The script orientation comes from working with data values that are packaged for use by TCL scripts;
mdw type primitives only print data they retrieve, and neither store nor return those values.
- arrayname ... is the name of an array variable
- width ... is 8/16/32 - indicating the memory access size
- address ... is the target memory address
- count ... is the number of elements to process
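For example, assuming a target object named mytarget (the address and count below are illustrative):

```tcl
# read eight 32-bit words starting at 0x20000000 into array "data"
mytarget mem2array data 32 0x20000000 8
# the array is indexed from 0; modify an element and write it all back
set data(0) 0xdeadbeef
mytarget array2mem data 32 0x20000000 8
```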
- Command: $target_name cget queryparm
- Each configuration parameter accepted by
$target_name configure can be individually queried, to return its current value. The queryparm is a parameter name accepted by that command, such as
-work-area-phys. There are a few special cases:
-event event_name – returns the handler for the event named event_name. This is a special case because setting a handler requires two parameters.
-type – returns the target type. This is a special case because this is set using
target create and can’t be changed using
$target_name configure.
For example, if you wanted to summarize information about all the targets you might use something like this:
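One possible sketch, using only the commands documented in this chapter:

```tcl
# print type and endianness for every known target
foreach name [target names] {
    set type   [$name cget -type]
    set endian [$name cget -endian]
    puts [format "Target %s: type %s, endian %s" $name $type $endian]
}
```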
- Command: $target_name curstate
- Displays the current target state:
debug-running,
halted,
reset,
running, or
unknown. (Also, see [#Event-Polling Event Polling].)
- Command: $target_name eventlist
- Displays a table listing all event handlers currently associated with this target. See [#Target-Events Target Events].
- Command: $target_name invoke-event event_name
- Invokes the handler for the event named event_name. (This is primarily intended for use by OpenOCD framework code, for example by the reset code in ‘startup.tcl’.)
- Command: $target_name mdw addr [count]
- Command: $target_name mdh addr [count]
- Command: $target_name mdb addr [count]
- Display contents of address addr, as 32-bit words (
mdw), 16-bit halfwords (
mdh), or 8-bit bytes (
mdb). If count is specified, displays that many units. (If you want to manipulate the data instead of displaying it, see the
mem2array primitives.)
- Command: $target_name mww addr word
- Command: $target_name mwh addr halfword
- Command: $target_name mwb addr byte
- Writes the specified word (32 bits), halfword (16 bits), or byte (8-bit) pattern, at the specified address addr.
11.5 Target Events
At various times, certain things can happen, or you want them to happen. For example:
- What should happen when GDB connects? Should your target reset?
- When GDB tries to flash the target, do you need to enable the flash via a special command?
- Is using SRST appropriate (and possible) on your system? Or instead of that, do you need to issue JTAG commands to trigger reset? SRST usually resets everything on the scan chain, which can be inappropriate.
- During reset, do you need to write to certain memory locations to set up system clocks or to reconfigure the SDRAM? How about configuring the watchdog timer, or other peripherals, to stop running while you hold the core stopped for debugging?
All of the above items can be addressed by target event handlers. These are set up by
$target_name configure -event or
target create ... -event.
The programmer’s model matches the
-command option used in Tcl/Tk buttons and events. The two examples below act the same, but one creates and invokes a small procedure while the other inlines it.
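For example (the target name and handler body here are illustrative):

```tcl
# Variant 1: create a small procedure, then name it as the handler
proc my_attach_handler {} {
    echo "GDB attached; halting target"
    halt
}
mychip.cpu configure -event gdb-attach my_attach_handler

# Variant 2: inline the same body directly
mychip.cpu configure -event gdb-attach {
    echo "GDB attached; halting target"
    halt
}
```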
The following target events are defined:
- debug-halted
The target has halted for debug reasons (i.e.: breakpoint)
- debug-resumed
The target has resumed (i.e.: gdb said run)
- early-halted
Occurs early in the halt process
- gdb-attach
When GDB connects. This happens before any communication with the target, so this handler can be used to set up the target so that it is possible to probe flash. Probing flash is necessary during gdb connect if gdb load is to write the image to flash. Another use of the flash memory map is for GDB to automatically use hardware or software breakpoints depending on whether the breakpoint is in RAM or read-only memory.
- gdb-detach
When GDB disconnects
- gdb-end
When the target has halted and GDB is not doing anything (see early halt)
- gdb-flash-erase-start
Before the GDB flash process tries to erase the flash
- gdb-flash-erase-end
After the GDB flash process has finished erasing the flash
- gdb-flash-write-start
Before GDB writes to the flash
- gdb-flash-write-end
After GDB writes to the flash
- gdb-start
Before the target steps, gdb is trying to start/resume the target
- halted
The target has halted
- reset-assert-pre
Issued as part of
reset processing after
reset_init was triggered but before either SRST alone is re-asserted on the scan chain, or
reset-assert is triggered.
- reset-assert
Issued as part of
reset processing after
reset-assert-pre was triggered. When such a handler is present, cores which support this event will use it instead of asserting SRST. This support is essential for debugging with JTAG interfaces which don’t include an SRST line (JTAG doesn’t require SRST), and for selective reset on scan chains that have multiple targets.
- reset-assert-post
Issued as part of
reset processing after
reset-assert has been triggered, or the target asserted SRST on the entire scan chain.
- reset-deassert-pre
Issued as part of
reset processing after
reset-assert-post has been triggered.
- reset-deassert-post
Issued as part of
reset processing after
reset-deassert-pre has been triggered and (if the target is using it) after SRST has been released on the scan chain.
- reset-end
Issued as the final step in
reset processing.
- reset-init
Used by reset init command for board-specific initialization. This event fires after reset-deassert-post.
This is where you would configure PLLs and clocking, set up DRAM so you can download programs that don’t fit in on-chip SRAM, set up pin multiplexing, and so on. (You may be able to switch to a fast JTAG clock rate here, after the target clocks are fully set up.)
- reset-start
Issued as part of
reset processing before
reset_init is called.
This is the most robust place to use
jtag_rclk or
adapter_khz to switch to a low JTAG clock rate, when reset disables PLLs needed to use a fast clock.
- resume-start
Before any target is resumed
- resume-end
After all targets have resumed
- resume-ok
Success
- resumed
Target has resumed
12. Flash Commands
OpenOCD has different commands for NOR and NAND flash; the “flash” command works with NOR flash, while the “nand” command works with NAND flash. This partially reflects different hardware technologies: NOR flash usually supports direct CPU instruction and data bus access, while data from a NAND flash must be copied to memory before it can be used. (SPI flash must also be copied to memory before use.) However, the documentation also uses “flash” as a generic term; for example, “Put flash configuration in board-specific files”.
Flash Steps:
- Configure via the command
flash bank
Do this in a board-specific configuration file, passing parameters as needed by the driver.
- Operate on the flash via
flash subcommand
Often commands to manipulate the flash are typed by a human, or run via a script in some automated way. Common tasks include writing a boot loader, operating system, or other data.
- GDB Flashing
Flashing via GDB requires the flash be configured via “flash bank”, and the GDB flash features be enabled. See [#GDB-Configuration GDB Configuration].
Many CPUs have the ability to “boot” from the first flash bank. This means that misprogramming that bank can “brick” a system, so that it can’t boot. JTAG tools, like OpenOCD, are often then used to “de-brick” the board by (re)installing working boot firmware.
12.1 Flash Configuration Commands
- Config Command: flash bank name driver base size chip_width bus_width target [driver_options]
- Configures a flash bank which provides persistent storage for addresses from base to base + size - 1. These banks will often be visible to GDB through the target’s memory map. In some cases, configuring a flash bank will activate extra commands; see the driver-specific documentation.
- name ... may be used to reference the flash bank in other flash commands. A number is also available.
- driver ... identifies the controller driver associated with the flash bank being declared. This is usually
cfi for external flash, or else the name of a microcontroller with embedded flash memory. See [#Flash-Driver-List Flash Driver List].
- base ... Base address of the flash chip.
- size ... Size of the chip, in bytes. For some drivers, this value is detected from the hardware.
- chip_width ... Width of the flash chip, in bytes; ignored for most microcontroller drivers.
- bus_width ... Width of the data bus used to access the chip, in bytes; ignored for most microcontroller drivers.
- target ... Names the target used to issue commands to the flash controller.
- driver_options ... drivers may support, or require, additional parameters. See the driver-specific documentation for more information.
Note: This command is not available after OpenOCD initialization has completed. Use it in board specific configuration files, not interactively.
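An illustrative declaration for a single external 16 MByte CFI flash, sixteen bits wide on a sixteen-bit bus (the bank name and base address below are made up):

```tcl
# flash bank name driver base size chip_width bus_width target
flash bank extflash cfi 0x00000000 0x01000000 2 2 $_TARGETNAME
```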
- Command: flash banks
- Prints a one-line summary of each device that was declared using
flash bank, numbered from zero. Note that this is the plural form; the singular form is a very different command.
- Command: flash list
- Retrieves a list of associative arrays for each device that was declared using
flash bank, numbered from zero. This returned list can be manipulated easily from within scripts.
- Command: flash probe num
- Identify the flash, or validate the parameters of the configured flash. Operation depends on the flash type. The num parameter is a value shown by
flash banks. Most flash commands will implicitly autoprobe the bank; flash drivers can distinguish between probing and autoprobing, but most don’t bother.
12.2 Erasing, Reading, Writing to Flash
One feature distinguishing NOR flash from NAND or serial flash technologies is that for read access, it acts exactly like any other addressable memory. This means you can use normal memory read commands like
mdw or
dump_image with it, with no special
flash subcommands. See [#Memory-access Memory access], and [#Image-access Image access].
Write access works differently. Flash memory normally needs to be erased before it’s written. Erasing a sector turns all of its bits to ones, and writing can turn ones into zeroes. This is why there are special commands for interactive erasing and writing, and why GDB needs to know which parts of the address space hold NOR flash memory.
Note: Most of these erase and write commands leverage the fact that NOR flash chips consume target address space. They implicitly refer to the current JTAG target, and map from an address in that target’s address space back to a flash bank. A few commands use abstract addressing based on bank and sector numbers, and don’t depend on searching the current target and its address space. Avoid confusing the two command models.
Some flash chips implement software protection against accidental writes, since such buggy writes could in some cases “brick” a system. For such systems, erasing and writing may require sector protection to be disabled first. Examples include CFI flash such as “Intel Advanced Bootblock flash”, and AT91SAM7 on-chip flash. See [#flash-protect flash protect].
- Command: flash erase_sector num first last
- Erase sectors in bank num, starting at sector first up to and including last. Sector numbering starts at 0. Providing a last sector of ‘last’ specifies "to the end of the flash bank". The num parameter is a value shown by
flash banks.
- Command: flash erase_address [‘pad’] [‘unlock’] address length
- Erase sectors starting at address for length bytes. Unless ‘pad’ is specified, address must begin a flash sector, and address + length - 1 must end a sector. Specifying ‘pad’ erases extra data at the beginning and/or end of the specified region, as needed to erase only full sectors. The flash bank to use is inferred from the address, and the specified length must stay within that bank. As a special case, when length is zero and address is the start of the bank, the whole flash is erased. If ‘unlock’ is specified, then the flash is unprotected before erase starts.
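For instance, assuming a bank with 64 KByte sectors (addresses below are illustrative):

```tcl
# erase exactly one full sector starting at 0x00100000
flash erase_address 0x00100000 0x10000
# erase an unaligned region, padding out to whole sectors
flash erase_address pad 0x00100004 0xff00
```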
- Command: flash fillw address word length
- Command: flash fillh address halfword length
- Command: flash fillb address byte length
- Fills flash memory with the specified word (32 bits), halfword (16 bits), or byte (8-bit) pattern, starting at address and continuing for length units (word/halfword/byte). No erasure is done before writing; when needed, that must be done before issuing this command. Writes are done in blocks of up to 1024 bytes, and each write is verified by reading back the data and comparing it to what was written. The flash bank to use is inferred from the address of each block, and the specified length must stay within that bank.
- Command: flash write_bank num filename offset
- Write the binary ‘filename’ to flash bank num, starting at offset bytes from the beginning of the bank. The num parameter is a value shown by
flash banks.
- Command: flash write_image [erase] [unlock] filename [offset] [type]
- Write the image ‘filename’ to the current target’s flash bank(s). A relocation offset may be specified, in which case it is added to the base address for each section in the image. The file [type] can be specified explicitly. The relevant flash sectors will be erased prior to programming if the ‘erase’ parameter is given. If ‘unlock’ is provided, then the flash banks are unlocked before erase and program. The flash bank to use is inferred from the address of each image section.
Warning: Be careful using the ‘erase’ flag when the flash is holding data you want to preserve. Portions of the flash outside those described in the image’s sections might be erased with no notice.
- When a section of the image being written does not fill out all the sectors it uses, the unwritten parts of those sectors are necessarily also erased, because sectors can’t be partially erased.
- Data stored in sector "holes" between image sections is also affected. For example, "
flash write_image erase ..." of an image with one byte at the beginning of a flash bank and one byte at the end erases the entire bank – not just the two sectors being written.
Also, when flash protection is important, you must re-apply it after it has been removed by the ‘unlock’ flag.
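Typical invocations might look like this (the filenames are illustrative):

```tcl
# ELF and hex images carry their own addresses
flash write_image erase firmware.elf
# raw binaries need a base offset (and, to be explicit, a type)
flash write_image erase firmware.bin 0x00100000 bin
```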
12.3 Other Flash commands
- Command: flash erase_check num
- Check erase state of sectors in flash bank num, and display that status. The num parameter is a value shown by
flash banks.
- Command: flash info num
- Print info about flash bank num. The num parameter is a value shown by
flash banks. This command first queries the hardware; it does not print cached and possibly stale information.
- Command: flash protect num first last (‘on’|‘off’)
- Enable (‘on’) or disable (‘off’) protection of flash sectors in flash bank num, starting at sector first and continuing up to and including last. Providing a last sector of ‘last’ specifies "to the end of the flash bank". The num parameter is a value shown by
flash banks.
12.4 Flash Driver List
As noted above, the
flash bank command requires a driver name, and allows driver-specific options and behaviors. Some drivers also activate driver-specific commands.
12.4.1 External Flash
- Flash Driver: cfi
- The “Common Flash Interface” (CFI) is the main standard for external NOR flash chips, each of which connects to a specific external chip select on the CPU. Frequently the first such chip is used to boot the system. Your board’s
reset-init handler might need to configure additional chip selects using other commands (like
mww to configure a bus and its timings), or perhaps configure a GPIO pin that controls the “write protect” pin on the flash chip. The CFI driver can use a target-specific working area to significantly speed up operation.
The CFI driver can accept the following optional parameters, in any order:
- jedec_probe ... is used to detect certain non-CFI flash ROMs, like AM29LV010 and similar types.
- x16_as_x8 ... when a 16-bit flash is hooked up to an 8-bit bus.
To configure two adjacent banks of 16 MBytes each, both sixteen bits (two bytes) wide on a sixteen bit bus:
To configure one bank of 32 MBytes built from two sixteen bit (two byte) wide parts wired in parallel to create a thirty-two bit (four byte) bus with doubled throughput:
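Those two layouts might be declared as follows (bank names and base addresses are illustrative):

```tcl
# two adjacent 16 MByte banks, each 16 bits wide on a 16-bit bus
flash bank bank0 cfi 0x00000000 0x01000000 2 2 $_TARGETNAME
flash bank bank1 cfi 0x01000000 0x01000000 2 2 $_TARGETNAME

# one 32 MByte bank: two 16-bit parts in parallel on a 32-bit bus
flash bank bank0 cfi 0x00000000 0x02000000 2 4 $_TARGETNAME
```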
- Flash Driver: stmsmi
- Some devices from STMicroelectronics (e.g. STR75x MCU family, SPEAr MPU family) include a proprietary “Serial Memory Interface” (SMI) controller able to drive external SPI flash devices. Depending on specific device and board configuration, up to 4 external flash devices can be connected.
SMI makes the flash content directly accessible in the CPU address space; each external device is mapped in a memory bank. CPU can directly read data, execute code and boot from SMI banks. Normal OpenOCD commands like
mdw can be used to display the flash content.
The setup command only requires the base parameter in order to identify the memory bank. All other parameters are ignored. Additional information, like flash size, is detected automatically.
12.4.2 Internal Flash (Microcontrollers)
- Flash Driver: aduc702x
- The ADUC702x analog microcontrollers from Analog Devices include internal flash and use ARM7TDMI cores. The aduc702x flash driver works with models ADUC7019 through ADUC7028. The setup command only requires the target argument since all devices in this family have the same memory layout.
- Flash Driver: at91sam3
- All members of the AT91SAM3 microcontroller family from Atmel include internal flash and use ARM’s Cortex-M3 core. The driver currently (6/22/09) recognizes the AT91SAM3U[1/2/4][C/E] chips. Note that the driver was originally developed and tested using the AT91SAM3U4E, using a SAM3U-EK eval board. Support for other chips in the family was cribbed from the data sheet. Note to future readers/updaters: Please remove this worrisome comment after other chips are confirmed.
The AT91SAM3U4[E/C] (256K) chips have two flash banks; most other chips have one flash bank. In all cases the flash banks are at the following fixed locations:
Internally, the AT91SAM3 flash memory is organized as follows. Unlike the AT91SAM7 chips, these are not used as parameters to the
flash bank command:
- N-Banks: 256K chips have 2 banks, others have 1 bank.
- Bank Size: 128K/64K per flash bank
- Sectors: 16 or 8 per bank
- Sector Size: 8K per sector
- Page Size: 256 bytes per page. Note that OpenOCD operates on ’sector’ sizes, not page sizes.
The AT91SAM3 driver adds some additional commands:
- Command: at91sam3 gpnvm
- Command: at91sam3 gpnvm clear number
- Command: at91sam3 gpnvm set number
- Command: at91sam3 gpnvm show [‘all’|number]
- With no parameters,
show or
show all, shows the status of all GPNVM bits. With
show number, displays that bit.
With
set number or
clear number, modifies that GPNVM bit.
- Command: at91sam3 info
- This command attempts to display information about the AT91SAM3 chip. First it reads the
CHIPID_CIDR [address 0x400e0740, see Section 28.2.1, page 505 of the AT91SAM3U 29/may/2009 datasheet, document id: doc6430A] and decodes the values. Second it reads the various clock configuration registers and attempts to display how it believes the chip is configured. By default, the SLOWCLK is assumed to be 32768 Hz, see the command
at91sam3 slowclk.
- Command: at91sam3 slowclk [value]
- This command shows/sets the slow clock frequency used in the
at91sam3 info command calculations above.
- Flash Driver: at91sam4
- All members of the AT91SAM4 microcontroller family from Atmel include internal flash and use ARM’s Cortex-M4 core. This driver uses the same command names and syntax as the at91sam3 driver. See [#at91sam3 at91sam3].
- Flash Driver: at91sam7
- All members of the AT91SAM7 microcontroller family from Atmel include internal flash and use ARM7TDMI cores. The driver automatically recognizes a number of these chips using the chip identification register, and autoconfigures itself.
For chips which are not recognized by the controller driver, you must provide additional parameters in the following order:
- chip_model ... label used with
flash info
- banks
- sectors_per_bank
- pages_per_sector
- pages_size
- num_nvm_bits
- freq_khz ... required if an external clock is provided, optional (but recommended) when the oscillator frequency is known
It is recommended that you provide zeroes for all of those values except the clock frequency, so that everything except that frequency will be autoconfigured. Knowing the frequency helps ensure correct timings for flash access.
The flash controller handles erases automatically on a page (128/256 byte) basis, so explicit erase commands are not necessary for flash programming. However, there is an “EraseAll” command that can erase an entire flash plane (of up to 256KB), and it will be used automatically when you issue
flash erase_sector or
flash erase_address commands.
- Command: at91sam7 gpnvm bitnum (‘set’|‘clear’)
- Set or clear a “General Purpose Non-Volatile Memory” (GPNVM) bit for the processor. Each processor has a number of such bits, used for controlling features such as brownout detection (so they are not truly general purpose).
Note: This assumes that the first flash bank (number 0) is associated with the appropriate at91sam7 target.
- Flash Driver: avr
- The AVR 8-bit microcontrollers from Atmel integrate flash memory. The current implementation is incomplete.
- Flash Driver: lpc2000
- Most members of the LPC1700 and LPC2000 microcontroller families from NXP include internal flash and use Cortex-M3 (LPC1700) or ARM7TDMI (LPC2000) cores.
Note: There are LPC2000 devices which are not supported by the lpc2000 driver: The LPC2888 is supported by the lpc288x driver. The LPC29xx family is supported by the lpc2900 driver.
The lpc2000 driver defines two mandatory and one optional parameters, which must appear in the following order:
- variant ... required, may be ‘lpc2000_v1’ (older LPC21xx and LPC22xx) ‘lpc2000_v2’ (LPC213x, LPC214x, LPC210[123], LPC23xx and LPC24xx) or ‘lpc1700’ (LPC175x and LPC176x)
- clock_kHz ... the frequency, in kilohertz, at which the core is running
- ‘calc_checksum’ ... optional (but you probably want to provide this!), telling the driver to calculate a valid checksum for the exception vector table.
Note: If you don’t provide ‘calc_checksum’ when you’re writing the vector table, the boot ROM will almost certainly ignore your flash image. However, if you do provide it, with most tool chains
verify_image will fail.
LPC flashes don’t require the chip and bus width to be specified.
- Command: lpc2000 part_id bank
- Displays the four byte part identifier associated with the specified flash bank.
- Flash Driver: lpc288x
- The LPC2888 microcontroller from NXP needs slightly different flash support from its lpc2000 siblings. The lpc288x driver defines one mandatory parameter, the programming clock rate in Hz. LPC flashes don’t require the chip and bus width to be specified.
- Flash Driver: lpc2900
- This driver supports the LPC29xx ARM968E based microcontroller family from NXP.
The predefined parameters base, size, chip_width and bus_width of the
flash bank command are ignored. Flash size and sector layout are auto-configured by the driver. The driver has one additional mandatory parameter: The CPU clock rate (in kHz) at the time the flash operations will take place. Most of the time this will not be the crystal frequency, but a higher PLL frequency. The
reset-init event handler in the board script is usually the place where you start the PLL.
The driver rejects flashless devices (currently the LPC2930).
The EEPROM in LPC2900 devices is not mapped directly into the address space. It must be handled much more like NAND flash memory, and will therefore be handled by a separate
lpc2900_eeprom driver (not yet available).
Sector protection in terms of the LPC2900 is handled transparently. Every time a sector needs to be erased or programmed, it is automatically unprotected. What is shown as protection status in the
flash info command, is actually the LPC2900 sector security. This is a mechanism to prevent a sector from ever being erased or programmed again. As this is an irreversible mechanism, it is handled by a special command (
lpc2900 secure_sector), and not by the standard
flash protect command.
Example for a 125 MHz clock frequency:
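A sketch of such a declaration (the bank name is illustrative; base, size, and widths are given as zero since the driver auto-detects and ignores them, and the final driver option is the CPU clock in kHz):

```tcl
flash bank myflash lpc2900 0 0 0 0 $_TARGETNAME 125000
```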
Some
lpc2900-specific commands are defined. In the following command list, the bank parameter is the bank number as obtained by the
flash banks command.
- Command: lpc2900 signature bank
- Calculates a 128-bit hash value, the signature, from the whole flash content. This is a hardware feature of the flash block, hence the calculation is very fast. You may use this to verify the content of a programmed device against a known signature. Example:
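Assuming bank 0 (as listed by flash banks):

```tcl
lpc2900 signature 0
```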
- Command: lpc2900 read_custom bank filename
- Reads the 912 bytes of customer information from the flash index sector, and saves it to a file in binary format. Example:
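A sketch of such an invocation, with a hypothetical output path:

```tcl
# Save the 912 bytes of customer info from bank 0 to a binary file
lpc2900 read_custom 0 /path_to/customer_info.bin
```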
The index sector of the flash is a write-only sector. It cannot be erased! In order to guard against unintentional write access, all following commands need to be preceded by a successful call to the
password command:
- Command: lpc2900 password bank password
- You need to use this command right before each of the following commands:
lpc2900 write_custom,
lpc2900 secure_sector,
lpc2900 secure_jtag.
The password string is fixed to "I_know_what_I_am_doing". Example:
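For example, for bank 0 (the password string is the fixed one given above):

```tcl
lpc2900 password 0 I_know_what_I_am_doing
```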
- Command: lpc2900 write_custom bank filename type
- Writes the content of the file into the customer info space of the flash index sector. The filetype can be specified with the type field. Possible values for type are: bin (binary), ihex (Intel hex format), elf (ELF binary) or s19 (Motorola S-records). The file must contain a single section, and the contained data length must be exactly 912 bytes.
Attention: This cannot be reverted! Be careful!
Example:
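A sketch, assuming bank 0 and a hypothetical 912-byte binary input file:

```tcl
# The password command must immediately precede the write
lpc2900 password 0 I_know_what_I_am_doing
lpc2900 write_custom 0 /path_to/customer_info.bin bin
```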
- Command: lpc2900 secure_sector bank first last
- Secures the sector range from first to last (inclusive) against further program and erase operations. The sector security will be effective after the next power cycle.
Attention: This cannot be reverted! Be careful!
Secured sectors appear as protected in the
flash info command. Example:
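A sketch, assuming bank 0 and securing only sector 1:

```tcl
lpc2900 password 0 I_know_what_I_am_doing
# Irreversible! Sector 1 can never be erased or programmed again.
lpc2900 secure_sector 0 1 1
```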
- Command: lpc2900 secure_jtag bank
- Irreversibly disable the JTAG port. The new JTAG security setting will be effective after the next power cycle.
Attention: This cannot be reverted! Be careful!
Examples:
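A sketch, assuming bank 0:

```tcl
lpc2900 password 0 I_know_what_I_am_doing
# Irreversible! The JTAG port is disabled after the next power cycle.
lpc2900 secure_jtag 0
```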
- Flash Driver: ocl
- No idea what this is, other than using some arm7/arm9 core.
- Flash Driver: pic32mx
- The PIC32MX microcontrollers are based on the MIPS 4K cores, and integrate flash memory.
Some pic32mx-specific commands are defined:
- Command: pic32mx pgm_word address value bank
- Programs the specified 32-bit value at the given address in the specified chip bank.
- Command: pic32mx unlock bank
- Unlock and erase specified chip bank. This will remove any Code Protection.
- Flash Driver: stellaris
- All members of the Stellaris LM3Sxxx microcontroller family from Texas Instruments include internal flash and use ARM Cortex M3 cores. The driver automatically recognizes a number of these chips using the chip identification register, and autoconfigures itself.
- Command: stellaris recover bank_id
- Performs the Recovering a "Locked" Device procedure to restore the flash specified by bank_id and its associated nonvolatile registers to their factory default values (erased). This is the only way to remove flash protection or re-enable debugging if that capability has been disabled.
Note that the final "power cycle the chip" step in this procedure must be performed by hand, since OpenOCD can’t do it.
Warning: if more than one Stellaris chip is connected, the procedure is applied to all of them.
- Flash Driver: stm32f1x
- All members of the STM32f1x microcontroller family from ST Microelectronics include internal flash and use ARM Cortex M3 cores. The driver automatically recognizes a number of these chips using the chip identification register, and autoconfigures itself.
If you have a target with dual flash banks then define the second bank as per the following example.
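A plausible declaration for the second bank, with an illustrative base address (check your device's reference manual for the real one):

```tcl
# Second (upper) flash bank; size and geometry are probed automatically
flash bank $_FLASHNAME stm32f1x 0x08080000 0 0 0 $_TARGETNAME
```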
Some stm32f1x-specific commands are defined:
- Command: stm32f1x lock num
- Locks the entire stm32 device. The num parameter is a value shown by
flash banks.
- Command: stm32f1x unlock num
- Unlocks the entire stm32 device. The num parameter is a value shown by
flash banks.
- Command: stm32f1x options_read num
- Read and display the stm32 option bytes written by the
stm32f1x options_write command. The num parameter is a value shown by
flash banks.
- Command: stm32f1x options_write num (‘SWWDG’|‘HWWDG’) (‘RSTSTNDBY’|‘NORSTSTNDBY’) (‘RSTSTOP’|‘NORSTSTOP’)
- Writes the stm32 option byte with the specified values. The num parameter is a value shown by
flash banks.
- Flash Driver: stm32f2x
- All members of the STM32f2x microcontroller family from ST Microelectronics include internal flash and use ARM Cortex M3 cores. The driver automatically recognizes a number of these chips using the chip identification register, and autoconfigures itself.
- Flash Driver: str7x
- All members of the STR7 microcontroller family from ST Microelectronics include internal flash and use ARM7TDMI cores. The str7x driver defines one mandatory parameter, variant, which is either
STR71x,
STR73x or
STR75x.
- Command: str7x disable_jtag bank
- Activate the Debug/Readout protection mechanism for the specified flash bank.
- Flash Driver: str9x
- Most members of the STR9 microcontroller family from ST Microelectronics include internal flash and use ARM966E cores. The str9 needs the flash controller to be configured using the
str9x flash_config command prior to flash programming.
- Command: str9x flash_config num bbsr nbbsr bbadr nbbadr
- Configures the str9 flash controller. The num parameter is a value shown by
flash banks.
- bbsr - Boot Bank Size register
- nbbsr - Non Boot Bank Size register
- bbadr - Boot Bank Start Address register
- nbbadr - Non Boot Bank Start Address register
- Flash Driver: tms470
- Most members of the TMS470 microcontroller family from Texas Instruments include internal flash and use ARM7TDMI cores. This driver doesn’t require the chip and bus width to be specified.
Some tms470-specific commands are defined:
- Command: tms470 flash_keyset key0 key1 key2 key3
- Saves programming keys in a register, to enable flash erase and write commands.
- Command: tms470 osc_mhz clock_mhz
- Reports the clock speed, which is used to calculate timings.
- Command: tms470 plldis (0|1)
- Disables (1) or enables (0) use of the PLL to speed up the flash clock.
- Flash Driver: virtual
- This is a special driver that maps a previously defined bank to another address. All bank settings will be copied from the master physical bank.
The virtual driver defines one mandatory parameter:
- master_bank The bank that this virtual address refers to.
So in the following example addresses 0xbfc00000 and 0x9fc00000 refer to the flash bank defined at address 0x1fc00000. Any commands executed on the virtual banks are actually performed on the physical banks.
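One way this mapping could be written, assuming a PIC32-style physical bank (the bank names vbank0 and vbank1 are placeholders):

```tcl
# Physical bank at 0x1fc00000
flash bank $_FLASHNAME pic32mx 0x1fc00000 0 0 0 $_TARGETNAME
# Two virtual aliases of the same physical bank
flash bank vbank0 virtual 0xbfc00000 0 0 0 $_TARGETNAME $_FLASHNAME
flash bank vbank1 virtual 0x9fc00000 0 0 0 $_TARGETNAME $_FLASHNAME
```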
- Flash Driver: fm3
- All members of the FM3 microcontroller family from Fujitsu include internal flash and use ARM Cortex M3 cores. The fm3 driver uses the target parameter to select the correct bank config, it can currently be one of the following:
mb9bfxx1.cpu,
mb9bfxx2.cpu,
mb9bfxx3.cpu,
mb9bfxx4.cpu,
mb9bfxx5.cpu or
mb9bfxx6.cpu.
12.4.3 str9xpec driver
Here is some background info to help you better understand how this driver works. OpenOCD has two flash drivers for the str9:
- Standard driver ‘str9x’ programmed via the str9 core. Normally used for flash programming as it is faster than the ‘str9xpec’ driver.
- Direct programming ‘str9xpec’ using the flash controller. This is an ISC-compliant (IEEE 1532) TAP connected in series with the str9 core. The str9 core does not need to be running to program using this flash driver. Typical use for this driver is locking/unlocking the target and programming the option bytes.
Before we run any commands using the ‘str9xpec’ driver we must first disable the str9 core. This example assumes the ‘str9xpec’ driver has been configured for flash bank 0.
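A minimal sketch of that sequence, assuming the driver is configured as bank 0:

```tcl
# Remove the str9 core from the chain and talk to the flash controller
str9xpec enable_turbo 0
# Read the option bytes
str9xpec options_read 0
# Restore the str9 core into the JTAG chain
str9xpec disable_turbo 0
```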
The above example will read the str9 option bytes. When performing an unlock remember that you will not be able to halt the str9 - it has been locked. Halting the core is not required for the ‘str9xpec’ driver as mentioned above; just issue the commands above manually or from a telnet prompt.
- Flash Driver: str9xpec
- Only use this driver for locking/unlocking the device or configuring the option bytes. Use the standard str9 driver for programming. Before using the flash commands the turbo mode must be enabled using the
str9xpec enable_turbo command.
Several str9xpec-specific commands are defined:
- Command: str9xpec disable_turbo num
- Restore the str9 into JTAG chain.
- Command: str9xpec enable_turbo num
- Enable turbo mode, will simply remove the str9 from the chain and talk directly to the embedded flash controller.
- Command: str9xpec lock num
- Lock str9 device. The str9 will only respond to an unlock command that will erase the device.
- Command: str9xpec part_id num
- Prints the part identifier for bank num.
- Command: str9xpec options_cmap num (‘bank0’|‘bank1’)
- Configure str9 boot bank.
- Command: str9xpec options_lvdsel num (‘vdd’|‘vdd_vddq’)
- Configure str9 lvd source.
- Command: str9xpec options_lvdthd num (‘2.4v’|‘2.7v’)
- Configure str9 lvd threshold.
- Command: str9xpec options_lvdwarn bank (‘vdd’|‘vdd_vddq’)
- Configure str9 lvd reset warning source.
- Command: str9xpec options_read num
- Read str9 option bytes.
- Command: str9xpec options_write num
- Write str9 option bytes.
- Command: str9xpec unlock num
- Unlock str9 device.
12.5 mFlash
12.5.1 mFlash Configuration
- Config Command: mflash bank soc base RST_pin target
- Configures a mflash for soc host bank at address base. The pin number format depends on the host GPIO naming convention. Currently, the mflash driver supports s3c2440 and pxa270.
Example for s3c2440 mflash where RST pin is GPIO B1:
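Following the parameter order given above, this might be written as (the ‘1b’ pin encoding for GPIO B1 is an assumption based on the s3c2440 GPIO naming convention, and the base address is illustrative):

```tcl
mflash bank s3c2440 0x10000000 1b $_TARGETNAME
```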
Example for pxa270 mflash where RST pin is GPIO 43:
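Similarly for pxa270, again with an illustrative base address:

```tcl
mflash bank pxa270 0x08000000 43 $_TARGETNAME
```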
12.5.2 mFlash commands
- Command: mflash config pll frequency
- Configure mflash PLL. The frequency is the mflash input frequency, in Hz. Issuing this command will erase mflash’s whole internal NAND and write the new PLL. After this command, mflash needs a power-on reset for normal operation. If the PLL was newly configured, the storage and boot (optional) information also needs to be updated.
- Command: mflash config boot
- Configure bootable option. If the bootable option is set, mflash offers the first 8 sectors (4 kB) for boot.
- Command: mflash config storage
- Configure storage information. For the normal storage operation, this information must be written.
- Command: mflash dump num filename offset size
- Dump size bytes, starting at offset bytes from the beginning of the bank num, to the file named filename.
- Command: mflash probe
- Probe mflash.
- Command: mflash write num filename offset
- Write the binary file filename to mflash bank num, starting at offset bytes from the beginning of the bank.
13. NAND Flash Commands
Compared to NOR or SPI flash, NAND devices are inexpensive and high density. Today’s NAND chips, and multi-chip modules, commonly hold multiple GigaBytes of data.
NAND chips consist of a number of “erase blocks” of a given size (such as 128 KBytes), each of which is divided into a number of pages (of perhaps 512 or 2048 bytes each). Each page of a NAND flash has an “out of band” (OOB) area to hold Error Correcting Code (ECC) and other metadata, usually 16 bytes of OOB for every 512 bytes of page data.
One key characteristic of NAND flash is that its error rate is higher than that of NOR flash. In normal operation, that ECC is used to correct and detect errors. However, NAND blocks can also wear out and become unusable; those blocks are then marked "bad". NAND chips are even shipped from the manufacturer with a few bad blocks. The highest density chips use a technology (MLC) that wears out more quickly, so ECC support is increasingly important as a way to detect blocks that have begun to fail, and help to preserve data integrity with techniques such as wear leveling.
Software is used to manage the ECC. Some controllers don’t support ECC directly; in those cases, software ECC is used. Other controllers speed up the ECC calculations with hardware. Single-bit error correction hardware is routine. Controllers geared for newer MLC chips may correct 4 or more errors for every 512 bytes of data.
You will need to make sure that any data you write using OpenOCD includes the appropriate kind of ECC. For example, that may mean passing the
oob_softecc flag when writing NAND data, or ensuring that the correct hardware ECC mode is used.
The basic steps for using NAND devices include:
- Declare via the command
nand device
Do this in a board-specific configuration file, passing parameters as needed by the controller.
- Configure each device using
nand probe.
Do this only after the associated target is set up, such as in its reset-init script or in procedures defined to access that device.
- Operate on the flash via
nand subcommand
Often commands to manipulate the flash are typed by a human, or run via a script in some automated way. Common tasks include writing a boot loader, operating system, or other data needed to initialize or de-brick a board.
NOTE: At the time this text was written, the largest NAND flash fully supported by OpenOCD is 2 GiBytes (16 GiBits). This is because the variables used to hold offsets and lengths are only 32 bits wide. (Larger chips may work in some cases, unless an offset or length is larger than 0xffffffff, the largest 32-bit unsigned integer.) Some larger devices will work, since they are actually multi-chip modules with two smaller chips and individual chipselect lines.
13.1 NAND Configuration Commands
NAND chips must be declared in configuration scripts, plus some additional configuration that’s done after OpenOCD has initialized.
- Config Command: nand device name driver target [configparams...]
- Declares a NAND device, which can be read and written to after it has been configured through
nand probe. In OpenOCD, devices are single chips; this is unlike some operating systems, which may manage multiple chips as if they were a single (larger) device. In some cases, configuring a device will activate extra commands; see the controller-specific documentation.
NOTE: This command is not available after OpenOCD initialization has completed. Use it in board specific configuration files, not interactively.
- name ... may be used to reference the NAND bank in most other NAND commands. A number is also available.
- driver ... identifies the NAND controller driver associated with the NAND device being declared. See [#NAND-Driver-List NAND Driver List].
- target ... names the target used when issuing commands to the NAND controller.
- configparams ... controllers may support, or require, additional parameters. See the controller-specific documentation for more information.
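As a sketch, a board file for a DaVinci-based board might declare its NAND like this (the device name, target name, and addresses are illustrative; the three trailing parameters follow the davinci driver description later in this chapter):

```tcl
# name       driver  target    chip addr  ECC mode  AEMIF addr
nand device dm355.nand davinci dm355.arm 0x02000000 hwecc4 0x01e00000
```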
- Command: nand list
- Prints a summary of each device declared using
nand device, numbered from zero. Note that un-probed devices show no details.
- Command: nand probe num
- Probes the specified device to determine key characteristics like its page and block sizes, and how many blocks it has. The num parameter is the value shown by
nand list. You must (successfully) probe a device before you can use it with most other NAND commands.
13.2 Erasing, Reading, Writing to NAND Flash
- Command: nand dump num filename offset length [oob_option]
- Reads binary data from the NAND device and writes it to the file, starting at the specified offset. The num parameter is the value shown by
nand list.
Use a complete path name for filename, so you don’t depend on the directory used to start the OpenOCD server.
The offset and length must be exact multiples of the device’s page size. They describe a data region; the OOB data associated with each such page may also be accessed.
NOTE: At the time this text was written, no error correction was done on the data that’s read, unless raw access was disabled and the underlying NAND controller driver had a
read_page method which handled that error correction.
By default, only page data is saved to the specified file. Use an oob_option parameter to save OOB data:
- no oob_* parameter
Output file holds only page data; OOB is discarded.
oob_raw
Output file interleaves page data and OOB data; the file will be longer than "length" by the size of the spare areas associated with each data page. Note that this kind of "raw" access is different from what’s implied by
nand raw_access, which just controls whether a hardware-aware access method is used.
oob_only
Output file has only raw OOB data, and will be smaller than "length" since it will contain only the spare areas associated with each data page.
- Command: nand erase num [offset length]
- Erases blocks on the specified NAND device, starting at the specified offset and continuing for length bytes. Both of those values must be exact multiples of the device’s block size, and the region they specify must fit entirely in the chip. If those parameters are not specified, the whole NAND chip will be erased. The num parameter is the value shown by
nand list.
NOTE: This command will try to erase bad blocks, when told to do so, which will probably invalidate the manufacturer’s bad block marker. For the remainder of the current server session,
nand info will still report that the block “is” bad.
- Command: nand write num filename offset [option...]
- Writes binary data from the file into the specified NAND device, starting at the specified offset. Those pages should already have been erased; you can’t change zero bits to one bits. All data in the file will be written, assuming it doesn’t run past the end of the device. Only full pages are written, and any extra space in the last page will be filled with 0xff bytes. (That includes OOB data, if that’s being written.)
NOTE: At the time this text was written, bad blocks are ignored. That is, this routine will not skip bad blocks, but will instead try to write them. This can cause problems.
Provide at most one option parameter. With some NAND drivers, the meanings of these parameters may change if
nand raw_access was used to disable hardware ECC.
- no oob_* parameter
File has only page data, which is written. If raw access is in use, the OOB area will not be written. Otherwise, if the underlying NAND controller driver has a
write_page routine, that routine may write the OOB with hardware-computed ECC data.
oob_only
File has only raw OOB data, which is written to the OOB area. Each page’s data area stays untouched. This can be a dangerous option, since it can invalidate the ECC data. You may need to force raw access to use this mode.
oob_raw
File interleaves data and OOB data, both of which are written. If raw access is enabled, the data is written first, then the unaltered OOB. Otherwise, if the underlying NAND controller driver has a
write_page routine, that routine may modify the OOB before it’s written, to include hardware-computed ECC data.
oob_softecc
File has only page data, which is written. The OOB area is filled with 0xff, except for a standard 1-bit software ECC code stored in conventional locations. You might need to force raw access to use this mode, to prevent the underlying driver from applying hardware ECC.
oob_softecc_kw
File has only page data, which is written. The OOB area is filled with 0xff, except for a 4-bit software ECC specific to the boot ROM in Marvell Kirkwood SoCs. You might need to force raw access to use this mode, to prevent the underlying driver from applying hardware ECC.
- Command: nand verify num filename offset [option...]
- Verify the binary data in the file has been programmed to the specified NAND device, starting at the specified offset. The file data is read and compared to the contents of the flash, assuming it doesn’t run past the end of the device. As with
nand write, only full pages are verified, so any extra space in the last page will be filled with 0xff bytes.
This command accepts the same options as
nand write, and the file will be processed similarly to produce the buffers that can be compared against the contents produced from
nand dump.
NOTE: This will not work when the underlying NAND controller driver’s
write_page routine must update the OOB with a hardware-computed ECC before the data is written. This limitation may be removed in a future release.
13.3 Other NAND commands
- Command: nand check_bad_blocks num [offset length]
- Checks for manufacturer bad block markers on the specified NAND device. If no parameters are provided, checks the whole device; otherwise, starts at the specified offset and continues for length bytes. Both of those values must be exact multiples of the device’s block size, and the region they specify must fit entirely in the chip. The num parameter is the value shown by
nand list.
NOTE: Before using this command you should force raw access with
nand raw_access enable to ensure that the underlying driver will not try to apply hardware ECC.
- Command: nand info num
- The num parameter is the value shown by
nand list. This prints the one-line summary from "nand list", plus for devices which have been probed this also prints any known status for each block.
- Command: nand raw_access num (‘enable’|‘disable’)
- Sets or clears a flag affecting how page I/O is done. The num parameter is the value shown by
nand list.
This flag is cleared (disabled) by default, but changing that value won’t affect all NAND devices. The key factor is whether the underlying driver provides
read_page or
write_page methods. If it doesn’t provide those methods, the setting of this flag is irrelevant; all access is effectively “raw”.
When those methods exist, they are normally used when reading data (
nand dump or reading bad block markers) or writing it (
nand write). However, enabling raw access (setting the flag) prevents use of those methods, bypassing hardware ECC logic. This can be a dangerous option, since writing blocks with the wrong ECC data can cause them to be marked as bad.
13.4 NAND Driver List
As noted above, the
nand device command allows driver-specific options and behaviors. Some controllers also activate controller-specific commands.
- NAND Driver: at91sam9
- This driver handles the NAND controllers found on AT91SAM9 family chips from Atmel. It takes two extra parameters: address of the NAND chip; address of the ECC controller.
AT91SAM9 chips support single-bit ECC hardware. The
write_page and
read_page methods are used to utilize the ECC hardware unless they are disabled by using the
nand raw_access command. There are four additional commands that are needed to fully configure the AT91SAM9 NAND controller. Two are optional; most boards use the same wiring for ALE/CLE:
- Command: at91sam9 cle num addr_line
- Configure the address line used for latching commands. The num parameter is the value shown by
nand list.
- Command: at91sam9 ale num addr_line
- Configure the address line used for latching addresses. The num parameter is the value shown by
nand list.
For the next two commands, it is assumed that the pins have already been properly configured for input or output.
- Command: at91sam9 rdy_busy num pio_base_addr pin
- Configure the RDY/nBUSY input from the NAND device. The num parameter is the value shown by
nand list. pio_base_addr is the base address of the PIO controller and pin is the pin number.
- Command: at91sam9 ce num pio_base_addr pin
- Configure the chip enable input to the NAND device. The num parameter is the value shown by
nand list. pio_base_addr is the base address of the PIO controller and pin is the pin number.
- NAND Driver: davinci
- This driver handles the NAND controllers found on DaVinci family chips from Texas Instruments. It takes three extra parameters: address of the NAND chip; hardware ECC mode to use (‘hwecc1’, ‘hwecc4’, ‘hwecc4_infix’); address of the AEMIF controller on this processor.
All DaVinci processors support the single-bit ECC hardware, and newer ones also support the four-bit ECC hardware. The
write_page and
read_page methods are used to implement those ECC modes, unless they are disabled using the
nand raw_access command.
- NAND Driver: lpc3180
- These controllers require an extra
nand device parameter: the clock rate used by the controller.
- Command: lpc3180 select num [mlc|slc]
- Configures use of the MLC or SLC controller mode. MLC implies use of hardware ECC. The num parameter is the value shown by
nand list.
At this writing, this driver includes
write_page and
read_page methods. Using
nand raw_access to disable those methods will prevent use of hardware ECC in the MLC controller mode, but won’t change SLC behavior.
- NAND Driver: mx3
- This driver handles the NAND controller in the i.MX31. The mxc driver should work for this chip as well.
- NAND Driver: mxc
- This driver handles the NAND controller found in Freescale i.MX chips. It has support for v1 (i.MX27 and i.MX31) and v2 (i.MX35). The driver takes three extra arguments: chip (‘mx27’, ‘mx31’, ‘mx35’), ecc (‘noecc’, ‘hwecc’), and optionally ‘biswap’ to swap bad block information between the main area and spare area (off by default).
- Command: mxc biswap bank_num [enable|disable]
- Turns on/off bad block information swapping with the main area; without a parameter, queries the current status.
- NAND Driver: orion
- These controllers require an extra
nand device parameter: the address of the controller.
These controllers don’t define any specialized commands. At this writing, their drivers don’t include
write_page or
read_page methods, so
nand raw_access won’t change any behavior.
- NAND Driver: s3c2410
- NAND Driver: s3c2412
- NAND Driver: s3c2440
- NAND Driver: s3c2443
- NAND Driver: s3c6400
- These S3C family controllers don’t have any special
nand device options, and don’t define any specialized commands. At this writing, their drivers don’t include
write_page or
read_page methods, so
nand raw_access won’t change any behavior.
14. PLD/FPGA Commands.
14.1 PLD/FPGA Configuration and Commands.
- Config Command: pld device driver_name tap_name [driver_options]
- Defines a new PLD device, supported by driver driver_name, using the TAP named tap_name. The driver may make use of any driver_options to configure its behavior.
- Command: pld devices
- Lists the PLDs and their numbers.
- Command: pld load num filename
- Loads the file ‘filename’ into the PLD identified by num. The file format must be inferred by the driver.
14.2 PLD/FPGA Drivers, Options, and Commands
Drivers may support PLD-specific options to the
pld device definition command, and may also define commands usable only with that particular type of PLD.
- FPGA Driver: virtex2
- Virtex-II is a family of FPGAs sold by Xilinx. It supports the IEEE 1532 standard for In-System Configuration (ISC). No driver-specific PLD definition options are used, and one driver-specific command is defined.
- Command: virtex2 read_stat num
- Reads and displays the Virtex-II status register (STAT) for FPGA num.
15. General Commands
The commands documented in this chapter are common commands that you, as a human, may want to type and see the output of. Configuration type commands are documented elsewhere.
Intent:
- Source Of Commands
OpenOCD commands can occur in a configuration script (discussed elsewhere), be typed manually by a human, or be supplied programmatically via one of several TCP/IP ports.
- From the human
A human should interact with the telnet interface (default port: 4444) or via GDB (default port 3333).
To issue commands from within a GDB session, use the ‘monitor’ command, e.g. use ‘monitor poll’ to issue the ‘poll’ command. All output is relayed through the GDB session.
- Machine Interface
The Tcl interface’s intent is to be a machine interface. The default Tcl port is 6666.
15.1 Daemon Commands
- Command: exit
- Exits the current telnet session.
- Command: help [string]
- With no parameters, prints help text for all commands. Otherwise, prints each helptext containing string. Not every command provides helptext.
Configuration commands, and commands valid at any time, are explicitly noted in parentheses. In most cases, no such restriction is listed; this indicates commands which are only available after the configuration stage has completed.
- Command: sleep msec [‘busy’]
- Wait for at least msec milliseconds before resuming. If ‘busy’ is passed, busy-wait instead of sleeping. (This option is strongly discouraged.) Useful in connection with script files (
script command and
target_name configuration).
- Command: shutdown
- Close the OpenOCD daemon, disconnecting all clients (GDB, telnet, other).
- Command: debug_level [n]
- Display debug level. If n (from 0..3) is provided, then set it to that level. This affects the kind of messages sent to the server log. Level 0 is error messages only; level 1 adds warnings; level 2 adds informational messages; and level 3 adds debugging messages. The default is level 2, but that can be overridden on the command line along with the location of that log file (which is normally the server’s standard output). See section [#Running Running].
- Command: echo [-n] message
- Logs a message at "user" priority. Output message to stdout. Option "-n" suppresses trailing newline.
- Command: log_output [filename]
- Redirect logging to filename; the initial log output channel is stderr.
- Command: add_script_search_dir [directory]
- Add directory to the file/script search path.
15.2 Target State handling
In this section “target” refers to a CPU configured as shown earlier (see section [#CPU-Configuration CPU Configuration]). These commands, like many, implicitly refer to a current target which is used to perform the various operations. The current target may be changed by using the
targets command with the name of the target which should become current.
- Command: reg [(number|name) [value]]
- Access a single register by number or by its name. The target must generally be halted before access to CPU core registers is allowed. Depending on the hardware, some other registers may be accessible while the target is running.
With no arguments: list all available registers for the current target, showing number, name, size, value, and cache status. For valid entries, a value is shown; valid entries which are also dirty (and will be written back later) are flagged as such. With number/name: display that register’s value. With both number/name and value: set register’s value. Writes may be held in a writeback cache internal to OpenOCD, so that setting the value marks the register as dirty instead of immediately flushing that value. Resuming CPU execution (including by single stepping) or otherwise activating the relevant module will flush such values. Cores may have surprisingly many registers in their Debug and trace infrastructure:
- Command: halt [ms]
- Command: wait_halt [ms]
- The
halt command first sends a halt request to the target, which
wait_halt doesn’t. Otherwise these behave the same: wait up to ms milliseconds, or 5 seconds if there is no parameter, for the target to halt (and enter debug mode). Using 0 as the ms parameter prevents OpenOCD from waiting.
Warning: On ARM cores, software using the wait for interrupt operation often blocks the JTAG access needed by a
halt command. This is because that operation also puts the core into a low power mode by gating the core clock; but the core clock is needed to detect JTAG clock transitions. One partial workaround uses adaptive clocking: when the core is interrupted the operation completes, then JTAG clocks are accepted at least until the interrupt handler completes. However, this workaround is often unusable since the processor, board, and JTAG adapter must all support adaptive JTAG clocking. Also, it can’t work until an interrupt is issued. A more complete workaround is to not use that operation while you work with a JTAG debugger. Tasking environments generally have idle loops where the body is the wait for interrupt operation. (On older cores, it is a coprocessor action; newer cores have a ‘wfi’ instruction.) Such loops can just remove that operation, at the cost of higher power consumption (because the CPU is needlessly clocked).
- Command: resume [address]
- Resume the target at its current code position, or the optional address if it is provided. OpenOCD will wait 5 seconds for the target to resume.
- Command: step [address]
- Single-step the target at its current code position, or the optional address if it is provided.
- Command: reset
- Command: reset run
- Command: reset halt
- Command: reset init
- Perform as hard a reset as possible, using SRST if possible. All defined targets will be reset, and target events will fire during the reset sequence.
The optional parameter specifies what should happen after the reset. If there is no parameter, a reset run is executed. The other options will not work on all systems. See section [#Reset-Configuration Reset Configuration].
- - run: Let the target run
- - halt: Immediately halt the target
- - init: Immediately halt the target, and execute the reset-init script
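As a quick sketch, the reset variants above are used like this:

```tcl
reset           ;# same as "reset run": reset and let the target run
reset halt      ;# reset, then stop the CPU before it runs any code
reset init      ;# reset, halt, then run the reset-init event script
```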
- Command: soft_reset_halt
- Requests a target halt and executes a soft reset. This is often used when a target cannot be reset and halted. After the reset is released, the target begins to execute code. OpenOCD attempts to stop the CPU and then sets the program counter back to the reset vector. Unfortunately the code that was executed may have left the hardware in an unknown state.
15.3 I/O Utilities
These commands are available when OpenOCD is built with ‘--enable-ioutil’. They are mainly useful on embedded targets, notably the ZY1000. Hosts with operating systems have complementary tools.
Note: there are several more such commands.
- Command: append_file filename [string]*
- Appends the string parameters to the text file ‘filename’. Each string except the last one is followed by one space. The last string is followed by a newline.
- Command: cat filename
- Reads and displays the text file ‘filename’.
- Command: cp src_filename dest_filename
- Copies contents from the file ‘src_filename’ into ‘dest_filename’.
- Command: ip
- No description provided.
- Command: ls
- No description provided.
- Command: mac
- No description provided.
- Command: meminfo
- Display available RAM memory on OpenOCD host. Used in OpenOCD regression testing scripts.
- Command: peek
- No description provided.
- Command: poke
- No description provided.
- Command: rm filename
- Unlinks the file ‘filename’.
- Command: trunc filename
- Removes all data in the file ‘filename’.
15.4 Memory access commands
These commands allow accesses of a specific size to the memory system. Often these are used to configure the current target in some special way. For example, one may need to write certain values to the SDRAM controller to enable SDRAM.
- Use the targets (plural) command to change the current target.
- In system level scripts these commands are deprecated. Please use their TARGET object siblings to avoid making assumptions about what TAP is the current target, or about MMU configuration.
- Command: mdw [phys] addr [count]
- Command: mdh [phys] addr [count]
- Command: mdb [phys] addr [count]
- Display contents of address addr, as 32-bit words (mdw), 16-bit halfwords (mdh), or 8-bit bytes (mdb). When the current target has an MMU which is present and active, addr is interpreted as a virtual address. Otherwise, or if the optional phys flag is specified, addr is interpreted as a physical address. If count is specified, displays that many units. (If you want to manipulate the data instead of displaying it, see the mem2array primitives.)
- Command: mww [phys] addr word
- Command: mwh [phys] addr halfword
- Command: mwb [phys] addr byte
- Writes the specified word (32 bits), halfword (16 bits), or byte (8-bit) value, at the specified address addr. When the current target has an MMU which is present and active, addr is interpreted as a virtual address. Otherwise, or if the optional phys flag is specified, addr is interpreted as a physical address.
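A minimal sketch of the SDRAM-style setup mentioned above; the controller register address and values here are placeholders, not taken from any real chip:

```tcl
# Write a hypothetical SDRAM controller enable register.
mww 0xfffff800 0x00000001

# Read back four 32-bit words; "phys" forces a physical-address access.
mdw phys 0xfffff800 4

# Halfword and byte accesses follow the same pattern.
mwh 0x20000000 0xbeef
mdb 0x20000000 8
```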
15.5 Image loading commands
- Command: dump_image filename address size
- Dump size bytes of target memory starting at address to the binary file named filename.
- Command: fast_load
- Loads an image stored in memory by fast_load_image to the current target. Must be preceded by fast_load_image.
- Command: fast_load_image filename address [‘bin’|‘ihex’|‘elf’|‘s19’]
- Normally you should be using load_image or GDB load. However, for testing purposes or when I/O overhead is significant (OpenOCD running on an embedded host), storing the image in memory and uploading it to the target can be a way to upload e.g. multiple debug sessions when the binary does not change. Arguments are the same as load_image, but the image is stored in OpenOCD host memory, i.e. it does not affect the target. This approach is also useful when profiling target programming performance, as I/O and target programming can easily be profiled separately.
- Command: load_image filename address [[‘bin’|‘ihex’|‘elf’|‘s19’] ‘min_addr’ ‘max_length’]
- Load image from file filename to target memory offset by address from its load address. The file format may optionally be specified (‘bin’, ‘ihex’, ‘elf’, or ‘s19’). In addition the following arguments may be specified: min_addr - ignore data below min_addr (this is w.r.t. the target’s load address address); max_length - maximum number of bytes to load.
- Command: test_image filename [address [‘bin’|‘ihex’|‘elf’]]
- Displays image section sizes and addresses as if filename were loaded into target memory starting at address (defaults to zero). The file format may optionally be specified (‘bin’, ‘ihex’, or ‘elf’).
- Command: verify_image filename address [‘bin’|‘ihex’|‘elf’]
- Verify filename against target memory starting at address. The file format may optionally be specified (‘bin’, ‘ihex’, or ‘elf’). This will first attempt a comparison using a CRC checksum; if this fails, it will try a binary compare.
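Putting these together, a round trip might look like this (the file names and load address are hypothetical):

```tcl
load_image firmware.bin 0x20000000 bin    ;# load a raw binary into RAM
verify_image firmware.bin 0x20000000 bin  ;# CRC check, binary compare on mismatch
dump_image readback.bin 0x20000000 0x1000 ;# save 4 KiB back out for inspection
```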
15.6 Breakpoint and Watchpoint commands
CPUs often make debug modules accessible through JTAG, with hardware support for a handful of code breakpoints and data watchpoints. In addition, CPUs almost always support software breakpoints.
- Command: bp [address len [‘hw’]]
- With no parameters, lists all active breakpoints. Else sets a breakpoint on code execution starting at address for length bytes. This is a software breakpoint, unless ‘hw’ is specified in which case it will be a hardware breakpoint.
(See [#arm9-vector_005fcatch arm9 vector_catch], or see [#xscale-vector_005fcatch xscale vector_catch], for similar mechanisms that do not consume hardware breakpoints.)
- Command: rbp address
- Remove the breakpoint at address.
- Command: rwp address
- Remove data watchpoint on address
- Command: wp [address len [(‘r’|‘w’|‘a’) [value [mask]]]]
- With no parameters, lists all active watchpoints. Else sets a data watchpoint on data from address for length bytes. The watchpoint is an "access" watchpoint unless the ‘r’ or ‘w’ parameter is provided, defining it as respectively a read or write watchpoint. If a value is provided, that value is used when determining if the watchpoint should trigger. The value may first be masked using mask to mark “don’t care” fields.
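For example (the addresses and match value are placeholders):

```tcl
bp 0x00008000 4 hw                 ;# hardware breakpoint on a 4-byte instruction
wp 0x20000100 4 w 0x42 0xffffff00  ;# write watchpoint matching value 0x42,
                                   ;# with the upper bits masked as don't-care
bp                                 ;# list active breakpoints
rbp 0x00008000                     ;# remove the breakpoint
rwp 0x20000100                     ;# remove the watchpoint
```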
15.7 Misc Commands
- Command: profile seconds filename
- Profiling samples the CPU’s program counter as quickly as possible, which is useful for non-intrusive stochastic profiling. Saves up to 10000 samples in ‘filename’ using “gmon.out” format.
- Command: version
- Displays a string identifying the version of this OpenOCD server.
- Command: virt2phys virtual_address
- Requests the current target to map the specified virtual_address to its corresponding physical address, and displays the result.
16. Architecture and Core Commands
Most CPUs have specialized JTAG operations to support debugging. OpenOCD packages most such operations in its standard command framework. Some of those operations don’t fit well in that framework, so they are exposed here as architecture or implementation (core) specific commands.
16.1 ARM Hardware Tracing
CPUs based on ARM cores may include standard tracing interfaces, based on an “Embedded Trace Module” (ETM) which sends voluminous address and data bus trace records to a “Trace Port”.
- Development-oriented boards will sometimes provide a high speed trace connector for collecting that data, when the particular CPU supports such an interface. (The standard connector is a 38-pin Mictor, with both JTAG and trace port support.) Those trace connectors are supported by higher end JTAG adapters and some logic analyzer modules; frequently those modules can buffer several megabytes of trace data. Configuring an ETM coupled to such an external trace port belongs in the board-specific configuration file.
- If the CPU doesn’t provide an external interface, it probably has an “Embedded Trace Buffer” (ETB) on the chip, which is a dedicated SRAM. 4KBytes is one common ETB size. Configuring an ETM coupled only to an ETB belongs in the CPU-specific (target) configuration file, since it works the same on all boards.
ETM support in OpenOCD doesn’t seem to be widely used yet.
Issues: ETM support may be buggy, and at least some etm config parameters should be detected by asking the ETM for them.
ETM trigger events could also implement a kind of complex hardware breakpoint, much more powerful than the simple watchpoint hardware exported by EmbeddedICE modules. Such breakpoints can be triggered even when using the dummy trace port driver.
It seems like a GDB hookup should be possible, as well as tracing only during specific states (perhaps handling IRQ 23 or calls foo()).
There should be GUI tools to manipulate saved trace data and help analyse it in conjunction with the source code. It’s unclear how much of a common interface is shared with the current XScale trace support, or should be shared with eventual Nexus-style trace module support.
At this writing (November 2009) only ARM7, ARM9, and ARM11 support for ETM modules is available. The code should be able to work with some newer cores; but not all of them support this original style of JTAG access.
16.1.1 ETM Configuration
ETM setup is coupled with the trace port driver configuration.
- Config Command: etm config target width mode clocking driver
- Declares the ETM associated with target, and associates it with a given trace port driver. See [#Trace-Port-Drivers Trace Port Drivers].
Several of the parameters must reflect the trace port capabilities, which are a function of silicon capabilities (exposed later using etm info) and of what hardware is connected to that port (such as an external pod, or ETB). The width must be either 4, 8, or 16, except with ETMv3.0 and newer modules which may also support 1, 2, 24, 32, 48, and 64 bit widths. (With those versions, etm info also shows whether the selected port width and mode are supported.)
The mode must be ‘normal’, ‘multiplexed’, or ‘demultiplexed’. The clocking must be ‘half’ or ‘full’.
Warning: With ETMv3.0 and newer, the bits set with the mode and clocking parameters both control the mode. This modified mode does not map to the values supported by previous ETM modules, so this syntax is subject to change.
Note: You can see the ETM registers using the reg command. Not all possible registers are present in every ETM. Most of the registers are write-only, and are used to configure what CPU activities are traced.
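A sketch of a target-file declaration, assuming a hypothetical target named mychip.cpu with a 16-bit normal-mode trace port at half-rate clocking, feeding an on-chip ETB:

```tcl
# Declare the ETM and couple it to the "etb" trace port driver.
etm config mychip.cpu 16 normal half etb

# Later, interactively, check what the silicon actually supports.
etm info
etm status
```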
- Command: etm info
- Displays information about the current target’s ETM. This includes resource counts from the ETM_CONFIG register, as well as silicon capabilities (except on rather old modules) from the ETM_SYS_CONFIG register.
- Command: etm status
- Displays status of the current target’s ETM and trace port driver: is the ETM idle, or is it collecting data? Did trace data overflow? Was it triggered?
- Command: etm tracemode [type context_id_bits cycle_accurate branch_output]
- Displays what data the ETM will collect. If arguments are provided, first configures that data. When the configuration changes, tracing is stopped and any buffered trace data is invalidated.
- type ... describing how data accesses are traced, when they pass any ViewData filtering that was set up. The value is one of ‘none’ (save nothing), ‘data’ (save data), ‘address’ (save addresses), ‘all’ (save data and addresses)
- context_id_bits ... 0, 8, 16, or 32
- cycle_accurate ... ‘enable’ or ‘disable’ cycle-accurate instruction tracing. Before ETMv3, enabling this causes much extra data to be recorded.
- branch_output ... ‘enable’ or ‘disable’. Disable this unless you need to try reconstructing the instruction trace stream without an image of the code.
- Command: etm trigger_debug (‘enable’|‘disable’)
- Displays whether ETM triggering debug entry (like a breakpoint) is enabled or disabled, after optionally modifying that configuration. The default behaviour is ‘disable’. Any change takes effect after the next etm start.
By using script commands to configure ETM registers, you can make the processor enter debug state automatically when certain conditions, more complex than supported by the breakpoint hardware, happen.
16.1.2 ETM Trace Operation
After setting up the ETM, you can use it to collect data. That data can be exported to files for later analysis. It can also be parsed with OpenOCD, for basic sanity checking.
To configure what is being traced, you will need to write various trace registers using reg ETM_* commands. For the definitions of these registers, read ARM publication IHI 0014, “Embedded Trace Macrocell, Architecture Specification”. Be aware that most of the relevant registers are write-only, and that ETM resources are limited. There are only a handful of address comparators, data comparators, counters, and so on.
Examples of scenarios you might arrange to trace include:
- Code flow within a function, excluding subroutines it calls. Use address range comparators to enable tracing for instruction access within that function’s body.
- Code flow within a function, including subroutines it calls. Use the sequencer and address comparators to activate tracing on an “entered function” state, then deactivate it by exiting that state when the function’s exit code is invoked.
- Code flow starting at the fifth invocation of a function, combining one of the above models with a counter.
- CPU data accesses to the registers for a particular device, using address range comparators and the ViewData logic.
- Such data accesses only during IRQ handling, combining the above model with sequencer triggers which fire on entry and exit to the IRQ handler.
- ... more
At this writing, September 2009, there are no Tcl utility procedures to help set up any common tracing scenarios.
- Command: etm analyze
- Reads trace data into memory, if it wasn’t already present. Decodes and prints the data that was collected.
- Command: etm dump filename
- Stores the captured trace data in ‘filename’.
- Command: etm image filename [base_address] [type]
- Opens an image file.
- Command: etm load filename
- Loads captured trace data from ‘filename’.
- Command: etm start
- Starts trace data collection.
- Command: etm stop
- Stops trace data collection.
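A basic capture session chains these commands together; the image and file names are illustrative:

```tcl
etm start               ;# begin collecting trace data
resume                  ;# run the code of interest
halt
etm stop                ;# stop collection

etm image firmware.elf  ;# image used when decoding the trace
etm analyze             ;# decode and print the collected data
etm dump trace.bin      ;# keep the raw data for a later "etm load"
```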
16.1.3 Trace Port Drivers
To use an ETM trace port it must be associated with a driver.
- Trace Port Driver: dummy
- Use the ‘dummy’ driver if you are configuring an ETM that’s not connected to anything (on-chip ETB or off-chip trace connector). This driver lets OpenOCD talk to the ETM, but it does not expose any trace data collection.
- Config Command: etm_dummy config target
- Associates the ETM for target with a dummy driver.
- Trace Port Driver: etb
- Use the ‘etb’ driver if you are configuring an ETM to use on-chip ETB memory.
- Config Command: etb config target etb_tap
- Associates the ETM for target with the ETB at etb_tap. You can see the ETB registers using the reg command.
- Command: etb trigger_percent [percent]
- This displays, or optionally changes, ETB behavior after the ETM’s configured trigger event fires. It controls how much more trace data is saved after the (single) trace trigger becomes active.
- The default corresponds to trace around usage, recording 50 percent data before the event and the rest afterwards.
- The minimum value of percent is 2 percent, recording almost exclusively data before the trigger. Such extreme trace before usage can help figure out what caused that event to happen.
- The maximum value of percent is 100 percent, recording data almost exclusively after the event. This extreme trace after usage might help sort out how the event caused trouble.
- Trace Port Driver: oocd_trace
- This driver isn’t available unless OpenOCD was explicitly configured with the ‘--enable-oocd_trace’ option. You probably don’t want to configure it unless you’ve built the appropriate prototype hardware; it’s proof-of-concept software.
Use the ‘oocd_trace’ driver if you are configuring an ETM that’s connected to an off-chip trace connector.
- Config Command: oocd_trace config target tty
- Associates the ETM for target with a trace driver which collects data through the serial port tty.
- Command: oocd_trace resync
- Re-synchronizes with the capture clock.
- Command: oocd_trace status
- Reports whether the capture clock is locked or not.
16.2 Generic ARM
These commands should be available on all ARM processors. They are available in addition to other core-specific commands that may be available.
- Command: arm core_state [‘arm’|‘thumb’]
- Displays the core_state, optionally changing it to process either ‘arm’ or ‘thumb’ instructions. The target may later be resumed in the currently set core_state. (Processors may also support the Jazelle state, but that is not currently supported in OpenOCD.)
- Command: arm disassemble address [count [‘thumb’]]
- Disassembles count instructions starting at address. If count is not specified, a single instruction is disassembled. If ‘thumb’ is specified, or the low bit of the address is set, Thumb2 (mixed 16/32-bit) instructions are used; else ARM (32-bit) instructions are used. (Processors may also support the Jazelle state, but those instructions are not currently understood by OpenOCD.)
Note that all Thumb instructions are Thumb2 instructions, so older processors (without Thumb2 support) will still see correct disassembly of Thumb code. Also, ThumbEE opcodes are the same as Thumb2, with a handful of exceptions. ThumbEE disassembly currently has no explicit support.
- Command: arm mcr pX op1 CRn CRm op2 value
- Write value to a coprocessor pX register, passing parameters CRn, CRm, and opcodes op1 and op2, using the MCR instruction. (Parameter sequence matches the ARM instruction, but omits an ARM register.)
- Command: arm mrc pX op1 CRn CRm op2
- Read a coprocessor pX register, passing parameters CRn, CRm, and opcodes op1 and op2, using the MRC instruction. Returns the result so it can be manipulated by Jim scripts. (Parameter sequence matches the ARM instruction, but omits an ARM register.)
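For instance, the ARM instruction MRC p15, 0, Rd, c0, c0, 0 reads the main ID register on cores with CP15; dropping the ARM register gives the OpenOCD form. A sketch, not verified on any particular core:

```tcl
# Read CP15 register c0,c0,0 (the main ID register).
set midr [arm mrc 15 0 0 0 0]
echo [format "MIDR: 0x%08x" $midr]

# The write form appends the value: "arm mcr 15 0 1 0 0 <value>"
# would write CP15 c1,c0,0 (the control register) -- be careful.
```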
- Command: arm reg
- Display a table of all banked core registers, fetching the current value from every core mode if necessary.
- Command: arm semihosting [‘enable’|‘disable’]
- Display status of semihosting, after optionally changing that status.
Semihosting allows code executing on an ARM target to use the I/O facilities on the host computer, i.e. the system where OpenOCD is running. The target application must be linked against a library implementing the ARM semihosting convention that forwards operation requests by using a special SVC instruction that is trapped at the Supervisor Call vector by OpenOCD.
16.3 ARMv4 and ARMv5 Architecture
The ARMv4 and ARMv5 architectures are widely used in embedded systems, and introduced core parts of the instruction set in use today. That includes the Thumb instruction set, introduced in the ARMv4T variant.
16.3.1 ARM7 and ARM9 specific commands
These commands are specific to ARM7 and ARM9 cores, like ARM7TDMI, ARM720T, ARM9TDMI, ARM920T or ARM926EJ-S. They are available in addition to the ARM commands, and any other core-specific commands that may be available.
- Command: arm7_9 dbgrq [‘enable’|‘disable’]
- Displays the value of the flag controlling use of the EmbeddedICE DBGRQ signal to force entry into debug mode, instead of breakpoints. If a boolean parameter is provided, first assigns that flag.
This should be safe for all but ARM7TDMI-S cores (like NXP LPC). This feature is enabled by default on most ARM9 cores, including ARM9TDMI, ARM920T, and ARM926EJ-S.
- Command: arm7_9 dcc_downloads [‘enable’|‘disable’]
- Displays the value of the flag controlling use of the debug communications channel (DCC) to write larger (>128 byte) amounts of memory. If a boolean parameter is provided, first assigns that flag.
DCC downloads offer a huge speed increase, but might be unsafe, especially with targets running at very low speeds. This command was introduced with OpenOCD rev. 60, and requires a few bytes of working area.
- Command: arm7_9 fast_memory_access [‘enable’|‘disable’]
- Displays the value of the flag controlling use of memory writes and reads that don’t check completion of the operation. If a boolean parameter is provided, first assigns that flag.
This provides a huge speed increase, especially with USB JTAG cables (FT2232), but might be unsafe if used with targets running at very low speeds, like the 32kHz startup clock of an AT91RM9200.
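These flags are usually set once in a target configuration file; whether each is safe depends on your core and clock speed, as noted above:

```tcl
arm7_9 dbgrq enable               ;# avoid on ARM7TDMI-S based cores
arm7_9 dcc_downloads enable       ;# needs a working area; speeds up big writes
arm7_9 fast_memory_access enable  ;# risky at very low target clock speeds
```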
16.3.2 ARM720T specific commands
These commands are available to ARM720T based CPUs, which are implementations of the ARMv4T architecture based on the ARM7TDMI-S integer core. They are available in addition to the ARM and ARM7/ARM9 commands.
- Command: arm720t cp15 opcode [value]
- DEPRECATED – avoid using this. Use the arm mrc or arm mcr commands instead.
Display cp15 register returned by the ARM instruction opcode; else if a value is provided, that value is written to that register. The opcode should be the value of either an MRC or MCR instruction.
16.3.3 ARM9 specific commands
ARM9-family cores are built around ARM9TDMI or ARM9E (including ARM9EJS) integer processors. Such cores include the ARM920T, ARM926EJ-S, and ARM966.
- Command: arm9 vector_catch [‘all’|‘none’|list]
- Vector Catch hardware provides a sort of dedicated breakpoint for hardware events such as reset, interrupt, and abort. You can use this to conserve normal breakpoint resources, so long as you’re not concerned with code that branches directly to those hardware vectors.
This always finishes by listing the current configuration. If parameters are provided, it first reconfigures the vector catch hardware to intercept ‘all’ of the hardware vectors, ‘none’ of them, or a list with one or more of the following: ‘reset’ ‘undef’ ‘swi’ ‘pabt’ ‘dabt’ ‘irq’ ‘fiq’.
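For example, to trap faults without spending breakpoint resources:

```tcl
# Catch undefined instructions and both kinds of abort,
# leaving IRQ and FIQ to run normally.
arm9 vector_catch undef pabt dabt

# Show the current configuration; "none" clears it again.
arm9 vector_catch
arm9 vector_catch none
```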
16.3.4 ARM920T specific commands
These commands are available to ARM920T based CPUs, which are implementations of the ARMv4T architecture built using the ARM9TDMI integer core. They are available in addition to the ARM, ARM7/ARM9, and ARM9 commands.
- Command: arm920t cache_info
- Print information about the caches found. This allows you to see whether your target is an ARM920T (2x16kByte cache) or ARM922T (2x8kByte cache).
- Command: arm920t cp15 regnum [value]
- Display cp15 register regnum; else if a value is provided, that value is written to that register. This uses "physical access" and the register number is as shown in bits 38..33 of table 9-9 in the ARM920T TRM. (Not all registers can be written.)
- Command: arm920t cp15i opcode [value [address]]
- DEPRECATED – avoid using this. Use the arm mrc or arm mcr commands instead.
Interpreted access using ARM instruction opcode, which should be the value of either an MRC or MCR instruction (as shown in tables 9-11, 9-12, and 9-13 of the ARM920T TRM). If no value is provided, the result is displayed. Else that value is written, using the specified address, or using zero if no other address is provided.
- Command: arm920t read_cache filename
- Dump the content of ICache and DCache to a file named ‘filename’.
- Command: arm920t read_mmu filename
- Dump the content of the ITLB and DTLB to a file named ‘filename’.
16.3.5 ARM926ej-s specific commands
These commands are available to ARM926ej-s based CPUs, which are implementations of the ARMv5TEJ architecture based on the ARM9EJ-S integer core. They are available in addition to the ARM, ARM7/ARM9, and ARM9 commands.
The Feroceon cores also support these commands, although they are not built from ARM926ej-s designs.
- Command: arm926ejs cache_info
- Print information about the caches found.
16.3.6 ARM966E specific commands
These commands are available to ARM966 based CPUs, which are implementations of the ARMv5TE architecture. They are available in addition to the ARM, ARM7/ARM9, and ARM9 commands.
- Command: arm966e cp15 regnum [value]
- Display cp15 register regnum; else if a value is provided, that value is written to that register. The six bit regnum values are bits 37..32 from table 7-2 of the ARM966E-S TRM. There is no current control over bits 31..30 from that table, as required for BIST support.
16.3.7 XScale specific commands
Some notes about the debug implementation on the XScale CPUs:
The XScale CPU provides a special debug-only mini-instruction cache (mini-IC) in which exception vectors and target-resident debug handler code are placed by OpenOCD. In order to get access to the CPU, OpenOCD must point vector 0 (the reset vector) to the entry of the debug handler. However, this means that the complete first cacheline in the mini-IC is marked valid, which makes the CPU fetch all exception handlers from the mini-IC, ignoring the code in RAM.
To address this situation, OpenOCD provides the xscale vector_table command, which allows the user to explicitly write individual entries to either the high or low vector table stored in the mini-IC.
It is recommended to place a pc-relative indirect branch in the vector table, and put the branch destination somewhere in memory. Doing so makes sure the code in the vector table stays constant regardless of code layout in memory.
Alternatively, you may choose to keep some or all of the mini-IC vector table entries synced with those written to memory by your system software. The mini-IC can not be modified while the processor is executing, but for each vector table entry not previously defined using the xscale vector_table command, OpenOCD will copy the value from memory to the mini-IC every time execution resumes from a halt. This is done for both high and low vector tables (although the table not in use may not be mapped to valid memory, and in this case that copy operation will silently fail). This means that you will need to briefly halt execution at some strategic point during system start-up; e.g., after the software has initialized the vector table, but before exceptions are enabled. A breakpoint can be used to accomplish this once the appropriate location in the start-up code has been identified. A watchpoint over the vector table region is helpful in finding the location if you’re not sure. Note that the same situation exists any time the vector table is modified by the system software.
The debug handler must be placed somewhere in the address space using the xscale debug_handler command. The allowed locations for the debug handler are either (0x800 - 0x1fef800) or (0xfe000800 - 0xfffff800). The default value is 0xfe000800.
XScale has resources to support two hardware breakpoints and two watchpoints. However, the following restrictions on watchpoint functionality apply: (1) the value and mask arguments to the wp command are not supported, (2) the watchpoint length must be a power of two and not less than four, and can not be greater than the watchpoint address, and (3) a watchpoint with a length greater than four consumes all the watchpoint hardware resources. This means that at any one time, you can have enabled either two watchpoints with a length of four, or one watchpoint with a length greater than four.
These commands are available to XScale based CPUs, which are implementations of the ARMv5TE architecture.
- Command: xscale analyze_trace
- Displays the contents of the trace buffer.
- Command: xscale cache_clean_address address
- Changes the address used when cleaning the data cache.
- Command: xscale cache_info
- Displays information about the CPU caches.
- Command: xscale cp15 regnum [value]
- Display cp15 register regnum; else if a value is provided, that value is written to that register.
- Command: xscale debug_handler target address
- Changes the address used for the specified target’s debug handler.
- Command: xscale dcache [‘enable’|‘disable’]
- Enables or disables the CPU’s data cache.
- Command: xscale dump_trace filename
- Dumps the raw contents of the trace buffer to ‘filename’.
- Command: xscale icache [‘enable’|‘disable’]
- Enables or disables the CPU’s instruction cache.
- Command: xscale mmu [‘enable’|‘disable’]
- Enables or disables the CPU’s memory management unit.
- Command: xscale trace_buffer [‘enable’|‘disable’ [‘fill’ [n] | ‘wrap’]]
- Displays the trace buffer status, after optionally enabling or disabling the trace buffer and modifying how it is emptied.
- Command: xscale trace_image filename [offset [type]]
- Opens a trace image from ‘filename’, optionally rebasing its segment addresses by offset. The image type may optionally be specified.
- Command: xscale vector_catch [mask]
- Display a bitmask showing the hardware vectors to catch. If the optional parameter is provided, first set the bitmask to that value.
The mask bits correspond with bits 16..23 in the DCSR.
- Command: xscale vector_table [(‘low’|‘high’) index value]
- Set an entry in the mini-IC vector table. There are two tables: one for low vectors (at 0x00000000), and one for high vectors (0xFFFF0000), each holding the 8 exception vectors. index can be 1-7, because vector 0 points to the debug handler entry and can not be overwritten. value holds the 32-bit opcode that is placed in the mini-IC.
Without arguments, the current settings are displayed.
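Following the pc-relative indirect branch advice above, a sketch that pins one low-table entry; the opcode shown is meant to encode ldr pc, [pc, #0xf8], but is illustrative only and should be assembled yourself for real use:

```tcl
# Vector 1 (undefined instruction) in the low table becomes an
# indirect branch through a word placed later in memory.
# 0xe59ff0f8 is illustrative: ldr pc, [pc, #0xf8]
xscale vector_table low 1 0xe59ff0f8

# With no arguments, display the current table entries.
xscale vector_table
```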
16.4 ARMv6 Architecture
16.4.1 ARM11 specific commands
- Command: arm11 memwrite burst [‘enable’|‘disable’]
- Displays the value of the memwrite burst-enable flag, which is enabled by default. If a boolean parameter is provided, first assigns that flag. Burst writes are only used for memory writes larger than 1 word. They improve performance by assuming that the CPU has read each data word over JTAG and completed its write before the next word arrives, instead of polling for a status flag to verify that completion. This is usually safe, because JTAG runs much slower than the CPU.
- Command: arm11 memwrite error_fatal [‘enable’|‘disable’]
- Displays the value of the memwrite error_fatal flag, which is enabled by default. If a boolean parameter is provided, first assigns that flag. When set, certain memory write errors cause earlier transfer termination.
- Command: arm11 step_irq_enable [‘enable’|‘disable’]
- Displays the value of the flag controlling whether IRQs are enabled during single stepping; they are disabled by default. If a boolean parameter is provided, first assigns that.
- Command: arm11 vcr [value]
- Displays the value of the Vector Catch Register (VCR), coprocessor 14 register 7. If value is defined, first assigns that.
Vector Catch hardware provides dedicated breakpoints for certain hardware events. The specific bit values are core-specific (as in fact is using coprocessor 14 register 7 itself) but all current ARM11 cores except the ARM1176 use the same six bits.
16.5 ARMv7 Architecture
16.5.1 ARMv7 Debug Access Port (DAP) specific commands
These commands are specific to ARM architecture v7 Debug Access Port (DAP), included on Cortex-M3 and Cortex-A8 systems. They are available in addition to other core-specific commands that may be available.
- Command: dap apid [num]
- Displays ID register from AP num, defaulting to the currently selected AP.
- Command: dap apsel [num]
- Select AP num, defaulting to 0.
- Command: dap baseaddr [num]
- Displays debug base address from MEM-AP num, defaulting to the currently selected AP.
- Command: dap info [num]
- Displays the ROM table for MEM-AP num, defaulting to the currently selected AP.
- Command: dap memaccess [value]
- Displays the number of extra tck cycles in the JTAG idle to use for MEM-AP memory bus access [0-255], giving additional time to respond to reads. If value is defined, first assigns that.
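A quick exploration of a DAP with these commands might be:

```tcl
dap apsel 0      ;# select AP 0 (typically a MEM-AP)
dap apid         ;# show its ID register
dap baseaddr     ;# debug base address for the selected MEM-AP
dap info         ;# walk and display its ROM table
dap memaccess 8  ;# allow 8 extra idle TCK cycles per memory access
```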
16.5.2 Cortex-M3 specific commands
- Command: cortex_m3 maskisr (‘auto’|‘on’|‘off’)
- Control masking (disabling) interrupts during target step/resume.
The ‘auto’ option handles interrupts during stepping in a way that they get served but don’t disturb the program flow. The step command first allows pending interrupt handlers to execute, then disables interrupts and steps over the next instruction where the core was halted. After the step, interrupts are enabled again. If the interrupt handlers don’t complete within 500ms, the step command leaves with the core running. Note that a free breakpoint is required for the ‘auto’ option. If no breakpoint is available at the time of the step, then the step is taken with interrupts enabled, i.e. the same way the ‘off’ option does. Default is ‘auto’.
- Command: cortex_m3 vector_catch [‘all’|‘none’|list]
- Vector Catch hardware provides dedicated breakpoints for certain hardware events.
Parameters request interception of ‘all’ of these hardware event vectors, ‘none’ of them, or one or more of the following: ‘hard_err’ for a HardFault exception; ‘mm_err’ for a MemManage exception; ‘bus_err’ for a BusFault exception; ‘irq_err’, ‘state_err’, ‘chk_err’, or ‘nocp_err’ for various UsageFault exceptions; or ‘reset’. If NVIC setup code does not enable them, MemManage, BusFault, and UsageFault exceptions are mapped to HardFault. UsageFault checks for divide-by-zero and unaligned access must also be explicitly enabled. This finishes by listing the current vector catch configuration.
- Command: cortex_m3 reset_config (‘srst’|‘sysresetreq’|‘vectreset’)
- Control reset handling. The default ‘srst’ is to use srst if fitted, otherwise fallback to ‘vectreset’.
- - ‘srst’ use hardware srst if fitted otherwise fallback to ‘vectreset’.
- - ‘sysresetreq’ use NVIC SYSRESETREQ to reset system.
- - ‘vectreset’ use NVIC VECTRESET to reset system.
Using ‘vectreset’ is a safe option for all current Cortex-M3 cores. This however has the disadvantage of only resetting the core; all peripherals are unaffected. A solution would be to use a reset-init event handler to manually reset the peripherals. See [#Target-Events Target Events].
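Such a handler can be sketched as follows; the register address and value are purely illustrative placeholders, since the real peripheral-reset procedure is chip-specific:

    # Hypothetical sketch: reset peripherals from a reset-init handler.
    # 0x400FE000 is a placeholder address; consult your chip's manual.
    $_TARGETNAME configure -event reset-init {
        mww 0x400FE000 0x00000001
    }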
16.6 Software Debug Messages and Tracing
OpenOCD can process certain requests from target software, when the target uses appropriate libraries. The most powerful mechanism is semihosting, but there is also a lighter weight mechanism using only the DCC channel.
Currently
target_request debugmsgs is supported only for ‘arm7_9’ and ‘cortex_m3’ cores. These messages are received as part of target polling, so you need to have
poll on active to receive them. They are intrusive in that they will affect program execution times. If that is a problem, see [#ARM-Hardware-Tracing ARM Hardware Tracing].
See ‘libdcc’ in the contrib dir for more details. In addition to sending strings, characters, and arrays of various size integers from the target, ‘libdcc’ also exports a software trace point mechanism. The target being debugged may issue trace messages which include a 24-bit trace point number. Trace point support includes two distinct mechanisms, each supported by a command:
- History ... A circular buffer of trace points can be set up, and then displayed at any time. This tracks where code has been, which can be invaluable in finding out how some fault was triggered.
The buffer may overflow, since it collects records continuously. It may be useful to use some of the 24 bits to represent a particular event, and other bits to hold data.
- Counting ... An array of counters can be set up, and then displayed at any time. This can help establish code coverage and identify hot spots.
The array of counters is directly indexed by the trace point number, so trace points with higher numbers are not counted.
Linux-ARM kernels have a “Kernel low-level debugging via EmbeddedICE DCC channel” option (CONFIG_DEBUG_ICEDCC, depends on CONFIG_DEBUG_LL) which uses this mechanism to deliver messages before a serial console can be activated. This is not the same format used by ‘libdcc’. Other software, such as the U-Boot boot loader, sometimes does the same thing.
- Command: target_request debugmsgs [‘enable’|‘disable’|‘charmsg’]
- Displays current handling of target DCC message requests. These messages may be sent to the debugger while the target is running. The optional ‘enable’ and ‘charmsg’ parameters both enable the messages, while ‘disable’ disables them.
With ‘charmsg’ the DCC words each contain one character, as used by Linux with CONFIG_DEBUG_ICEDCC; otherwise the libdcc format is used.
- Command: trace history [‘clear’|count]
- With no parameter, displays all the trace points that have triggered in the order they triggered. With the parameter ‘clear’, erases all current trace history records. With a count parameter, allocates space for that many history records.
- Command: trace point [‘clear’|identifier]
- With no parameter, displays all trace point identifiers and how many times they have been triggered. With the parameter ‘clear’, erases all current trace point counters. With a numeric identifier parameter, creates a new a trace point counter and associates it with that identifier.
Important: The identifier and the trace point number are not related except by this command. These trace point numbers always start at zero (from server startup, or after
trace point clear) and count up from there.
17. JTAG Commands
Most general purpose JTAG commands have been presented earlier. (See [#JTAG-Speed JTAG Speed], [#Reset-Configuration Reset Configuration], and [#TAP-Declaration TAP Declaration].) Lower level JTAG commands, as presented here, may be needed to work with targets which require special attention during operations such as reset or initialization.
To use these commands you will need to understand some of the basics of JTAG, including:
- A JTAG scan chain consists of a sequence of individual TAP devices such as CPUs.
- Control operations involve moving each TAP through the same standard state machine (in parallel) using their shared TMS and clock signals.
- Data transfer involves shifting data through the chain of instruction or data registers of each TAP, writing new register values while reading the previous ones.
- Data register sizes are a function of the instruction active in a given TAP, while instruction register sizes are fixed for each TAP. All TAPs support a BYPASS instruction with a single bit data register.
- The way OpenOCD differentiates between TAP devices is by shifting different instructions into (and out of) their instruction registers.
17.1 Low Level JTAG Commands
These commands are used by developers who need to access JTAG instruction or data registers, possibly controlling the order of TAP state transitions. If you’re not debugging OpenOCD internals, or bringing up a new JTAG adapter or a new type of TAP device (like a CPU or JTAG router), you probably won’t need to use these commands. In a debug session that doesn’t use JTAG for its transport protocol, these commands are not available.
- Command: drscan tap [numbits value] [‘-endstate’ tap_state]
- Loads the data register of tap with a series of bit fields that specify the entire register. Each field is numbits bits long with a numeric value (hexadecimal encouraged). The return value holds the original value of each of those fields. All TAPs other than tap must be in BYPASS mode. When the optional tap_state is specified, the state machine is left in that (stable) state after the scan; otherwise the RUN/IDLE state is entered.
- Command: flush_count
- Returns the number of times the JTAG queue has been flushed. This may be used for performance tuning; flushing a queue over a link such as USB has a fixed cost, so batching small transfers into larger operations can improve throughput.
- Command: irscan [tap instruction] [‘-endstate’ tap_state]
- For each tap listed, loads the instruction register with its associated numeric instruction. (The number of bits in that instruction may be displayed using the scan_chain command.) For other TAPs, a BYPASS instruction is loaded. When the optional tap_state is specified, the state machine is left in that (stable) state after the scan; otherwise the RUN/IDLE state is entered.
- Command: jtag_reset trst srst
- Set values of reset signals. The trst and srst parameter values may be ‘0’, indicating that reset is inactive (pulled or driven high), or ‘1’, indicating it is active (pulled or driven low). The reset_config command should already have been used to configure how the board and JTAG adapter treat these two signals, and to say if either signal is even present. See section [#Reset-Configuration Reset Configuration].
Note that TRST is specially handled. It actually signifies JTAG’s RESET state. So if the board doesn’t support the optional TRST signal, or it doesn’t support it along with the specified SRST value, JTAG reset is triggered with TMS and TCK signals instead of the TRST signal. However it is triggered, once the scan chain enters the RESET state, reset events are delivered to all TAPs with handlers for that event.
- Command: pathmove start_state [next_state ...]
- Start by moving to start_state, which must be one of the stable states. Then, in a series of single state transitions (conforming to the JTAG state machine), shift to each next_state in sequence, one per TCK cycle. The final state must also be stable.
- Command: runtest num_cycles
- Move to the RUN/IDLE state, and execute at least num_cycles of the JTAG clock (TCK). Instructions often need some time to execute before they take effect.
- Command: verify_ircapture (‘enable’|‘disable’)
- Verify values captured during IRCAPTURE and returned during IR scans. Default is enabled, but this can be overridden by
verify_jtag. This flag is ignored when validating JTAG chain configuration.
- Command: verify_jtag (‘enable’|‘disable’)
- Enables verification of DR and IR scans, to help detect programming errors. For IR scans,
verify_ircapture must also be enabled. Default is enabled.
17.2 TAP state names
The tap_state names used by OpenOCD in the
drscan,
irscan, and
pathmove commands are the same as those used in SVF boundary scan documents, except that SVF uses IDLE instead of RUN/IDLE.
- RESET ... stable (with TMS high); acts as if TRST were pulsed
- RUN/IDLE ... stable; don’t assume this always means IDLE
- DRSELECT
- DRCAPTURE
- DRSHIFT ... stable; TDI/TDO shifting through the data register
- DREXIT1
- DRPAUSE ... stable; data register ready for update or more shifting
- DREXIT2
- DRUPDATE
- IRSELECT
- IRCAPTURE
- IRSHIFT ... stable; TDI/TDO shifting through the instruction register
- IREXIT1
- IRPAUSE ... stable; instruction register ready for update or more shifting
- IREXIT2
- IRUPDATE
Note that only six of those states are fully “stable” in the face of TMS fixed (low except for RESET) and a free-running JTAG clock. For all the others, the next TCK transition changes to a new state.
- From DRSHIFT and IRSHIFT, clock transitions will produce side effects by changing register contents. The values to be latched in upcoming DRUPDATE or IRUPDATE states may not be as expected.
- RUN/IDLE, DRPAUSE, and IRPAUSE are reasonable choices after drscan or irscan commands, since they are free of JTAG side effects.
- RUN/IDLE may have side effects that appear at non-JTAG levels, such as advancing the ARM9E-S instruction pipeline. Consult the documentation for the TAP(s) you are working with.
18. Boundary Scan Commands
One of the original purposes of JTAG was to support boundary scan based hardware testing. Although its primary focus is to support On-Chip Debugging, OpenOCD also includes some boundary scan commands.
18.1 SVF: Serial Vector Format
The Serial Vector Format, better known as SVF, is a way to represent JTAG test patterns in text files. In a debug session using JTAG for its transport protocol, OpenOCD supports running such test files.
- Command: svf filename [‘quiet’]
- This issues a JTAG reset (Test-Logic-Reset) and then runs the SVF script from ‘filename’. Unless the ‘quiet’ option is specified, each command is logged before it is executed.
18.2 XSVF: Xilinx Serial Vector Format
The Xilinx Serial Vector Format, better known as XSVF, is a binary representation of SVF which is optimized for use with Xilinx devices. In a debug session using JTAG for its transport protocol, OpenOCD supports running such test files.
Important: Not all XSVF commands are supported.
- Command: xsvf (tapname|‘plain’) filename [‘virt2’] [‘quiet’]
- This issues a JTAG reset (Test-Logic-Reset) and then runs the XSVF script from ‘filename’. When a tapname is specified, the commands are directed at that TAP. When ‘virt2’ is specified, the XRUNTEST command counts are interpreted as TCK cycles instead of microseconds. Unless the ‘quiet’ option is specified, messages are logged for comments and some retries.
The OpenOCD sources also include two utility scripts for working with XSVF; they are not currently installed after building the software. You may find them useful:
- svf2xsvf ... converts SVF files into the extended XSVF syntax understood by the xsvf command; see notes below.
- xsvfdump ... converts XSVF files into a text output format; understands the OpenOCD extensions.
The input format accepts a handful of non-standard extensions. These include three opcodes corresponding to SVF extensions from Lattice Semiconductor (LCOUNT, LDELAY, LDSR), and two opcodes supporting a more accurate translation of SVF (XTRST, XWAITSTATE). If xsvfdump shows a file is using those opcodes, it probably will not be usable with other XSVF tools.
19. TFTP
If OpenOCD runs on an embedded host (as the ZY1000 does), then TFTP can be used to access files on PCs (either the developer’s PC or some other PC).
The way this works on the ZY1000 is to prefix a filename by "/tftp/ip/" and append the TFTP path on the TFTP server (tftpd). For example,
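For example, a sketch using the developer PC address and file mentioned below (the actual command depends on what you are doing, e.g. loading an image):

    load_image /tftp/10.0.0.96/c:\temp\abc.elf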
will load c:\temp\abc.elf from the developer pc (10.0.0.96) into memory as if the file was hosted on the embedded host.
In order to achieve decent performance, you must choose a TFTP server that supports a packet size bigger than the default packet size (512 bytes). There are numerous TFTP servers out there (free and commercial) and you will have to do a bit of googling to find something that fits your requirements.
20. GDB and OpenOCD
OpenOCD complies with the remote gdbserver protocol, and as such can be used to debug remote targets. Setting up GDB to work with OpenOCD can involve several components:
- The OpenOCD server support for GDB may need to be configured. See [#GDB-Configuration GDB Configuration].
- GDB’s support for OpenOCD may need configuration, as shown in this chapter.
- If you have a GUI environment like Eclipse, that also will probably need to be configured.
Of course, the version of GDB you use will need to be one which has been built to know about the target CPU you’re using. It’s probably part of the tool chain you’re using. For example, if you are doing cross-development for ARM on an x86 PC, instead of using the native x86
gdb command you might use
arm-none-eabi-gdb if that’s the tool chain used to compile your code.
20.1 Connecting to GDB
Use GDB 6.7 or newer with OpenOCD if you run into trouble. For instance GDB 6.3 has a known bug that produces bogus memory access errors, which has since been fixed; see
OpenOCD can communicate with GDB in two ways:
- A socket (TCP/IP) connection is typically started as follows:
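A minimal sketch, assuming OpenOCD is listening on its default GDB port:

    (gdb) target remote localhost:3333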
This would cause GDB to connect to the gdbserver on the local pc using port 3333.
- A pipe connection is typically started as follows:
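A sketch, assuming ‘openocd’ is on the PATH and picks up a suitable configuration:

    (gdb) target remote | openocd -c "gdb_port pipe; log_output openocd.log"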
This would cause GDB to run OpenOCD and communicate using pipes (stdin/stdout). Using this method has the advantage of GDB starting/stopping OpenOCD for the debug session. log_output sends the log output to a file to ensure that the pipe is not saturated when using higher debug level outputs.
To list the available OpenOCD commands type
monitor help on the GDB command line.
20.2 Sample GDB session startup
With the remote protocol, GDB sessions start a little differently than they do when you’re debugging locally. Here’s an example showing how to start a debug session with a small ARM program. In this case the program was linked to be loaded into SRAM on a Cortex-M3. Most programs would be written into flash (address 0) and run from there.
You could then interrupt the GDB session to make the program break, type
where to show the stack,
list to show the code around the program counter,
step through code, set breakpoints or watchpoints, and so on.
20.3 Configuring GDB for OpenOCD
OpenOCD supports the gdb ‘qSupported’ packet; this enables information to be sent by the GDB remote server (i.e. OpenOCD) to GDB. Typical information includes packet size and the device’s memory map. You do not need to configure the packet size by hand, and the relevant parts of the memory map should be automatically set up when you declare (NOR) flash banks.
However, there are other things which GDB can’t currently query. You may need to set those up by hand. As OpenOCD starts up, you will often see a line reporting something like:
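For example (the target name and counts here are illustrative):

    Info : stm32.cpu: hardware has 6 breakpoints, 4 watchpoints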
You can pass that information to GDB with these commands:
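A sketch, assuming the target reported six hardware breakpoints and four watchpoints:

    set remote hardware-breakpoint-limit 6
    set remote hardware-watchpoint-limit 4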
With that particular hardware (Cortex-M3) the hardware breakpoints only work for code running from flash memory. Most other ARM systems do not have such restrictions.
Another example of useful GDB configuration came from a user who found that single stepping his Cortex-M3 didn’t work well with IRQs and an RTOS until he told GDB to disable the IRQs while stepping:
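One way to do that is with GDB step hooks invoking the cortex_m3 maskisr command described earlier; a sketch:

    define hook-step
    mon cortex_m3 maskisr on
    end
    define hookpost-step
    mon cortex_m3 maskisr off
    end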
Rather than typing such commands interactively, you may prefer to save them in a file and have GDB execute them as it starts, perhaps using a ‘.gdbinit’ in your project directory or starting GDB using
gdb -x filename.
20.4 Programming using GDB
By default the target memory map is sent to GDB. This can be disabled by the following OpenOCD configuration option:
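Assuming the standard gdb_memory_map setting, a sketch:

    gdb_memory_map disable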
For this to function correctly a valid flash configuration must also be set in OpenOCD. For faster performance you should also configure a valid working area.
Informing GDB of the memory map of the target will enable GDB to protect any flash areas of the target and use hardware breakpoints by default. This means that the OpenOCD option
gdb_breakpoint_override is not required when using a memory map. See [#gdb_005fbreakpoint_005foverride gdb_breakpoint_override].
To view the configured memory map in GDB, use the GDB command ‘info mem’. All other unassigned addresses within GDB are treated as RAM.
GDB 6.8 and higher set any memory area not in the memory map as inaccessible. This can be changed to the old behaviour by using the following GDB command:
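That behaviour is controlled by GDB’s inaccessible-by-default flag:

    set mem inaccessible-by-default off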
If
gdb_flash_program enable is also used, GDB will be able to program any flash memory using the vFlash interface.
GDB will look at the target memory map when a load command is given; if any areas to be programmed lie within the target flash area, the vFlash packets will be used.
If the target needs configuring before GDB programming, an event script can be executed:
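A sketch using the gdb-flash-erase-start event to re-initialize the target before programming begins:

    $_TARGETNAME configure -event gdb-flash-erase-start {
        reset init
    }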
To verify any flash programming the GDB command ‘compare-sections’ can be used.
20.5 Using openocd SMP with GDB
For SMP support the following GDB serial protocol packets have been defined:
- j - smp status request
- J - smp set request
OpenOCD implements:
- ‘jc’ packet for reading core id displayed by GDB connection. Reply is ‘XXXXXXXX’ (8 hex digits giving core id) or ‘E01’ for target not smp.
- ‘JcXXXXXXXX’ (8 hex digits) packet for setting core id displayed at next GDB continue (core id -1 is reserved for returning to normal resume mode). Reply ‘E01’ for target not smp or ‘OK’ on success.
Handling of this packet within GDB can be done:
- by the creation of an internal variable (e.g. ‘_core’) by means of the function allocate_computed_value, allowing the following GDB command.
- by the usage of a GDB maintenance command as described in the following example (2 CPUs in SMP with core ids 0 and 1; see [#Define-CPU-targets-working-in-SMP Define CPU targets working in SMP]).
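A hedged sketch of the maintenance-command approach, sending the raw ‘jc’ and ‘Jc’ packets by hand from a GDB session (the core id is illustrative):

    (gdb) maint packet jc
    (gdb) maint packet Jc00000001
    (gdb) continue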
21. Tcl Scripting API
21.1 API rules
The commands are stateless. E.g. while the telnet command line has a concept of a currently active target, the Tcl API procs take this sort of state information as an argument to each proc.
There are three main types of return values: single value, name value pair list and lists.
Name value pair. The proc ’foo’ below returns a name/value pair list.
> set foo(me) Duane
> set foo(you) Oyvind
> set foo(mouse) Micky
> set foo(duck) Donald

If one does this:

> set foo

The result is:

me Duane you Oyvind mouse Micky duck Donald

Thus, to get the names of the associative array is easy:

foreach { name value } [set foo] {
    puts "Name: $name, Value: $value"
}
Lists returned must be relatively small. Otherwise a range should be passed in to the proc in question.
21.2 Internal low-level Commands
By “low-level”, the intent is that a human would not directly use these commands.
Low-level commands are (should be) prefixed with "ocd_", e.g.
ocd_flash_banks is the low level API upon which
flash banks is implemented.
- mem2array <varname> <width> <addr> <nelems>
Read memory and return as a Tcl array for script processing
- array2mem <varname> <width> <addr> <nelems>
Convert a Tcl array to memory locations and write the values
- ocd_flash_banks <driver> <base> <size> <chip_width> <bus_width> <target> [‘driver options’ ...]
Return information about the flash banks
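For instance, a hypothetical snippet reading four 32-bit words from an illustrative address into a Tcl array and printing them:

    mem2array x 32 0x20000000 4
    for {set i 0} {$i < 4} {incr i} {
        puts [format "x(%d) = 0x%08x" $i $x($i)]
    }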
OpenOCD commands can consist of two words, e.g. "flash banks". The ‘startup.tcl’ "unknown" proc will translate this into a Tcl proc called "flash_banks".
21.3 OpenOCD specific Global Variables
Real Tcl has ::tcl_platform(), and platform::identify, and many other variables. JimTCL, as implemented in OpenOCD, creates $ocd_HOSTOS which holds one of the following values:
- cygwin Running under Cygwin
- darwin Darwin (Mac-OS) is the underlying operating system.
- freebsd Running under FreeBSD
- linux Linux is the underlying operating system
- mingw32 Running under MingW32
- winxx Built using Microsoft Visual Studio
- other Unknown, none of the above.
Note: ’winxx’ was chosen because today (March 2009) no distinction is made between Win32 and Win64.
Note: We should add support for a variable like the Tcl variable tcl_platform(platform); it should be called jim_platform (because it is Jim, not real Tcl).
22. FAQ
- RTCK, also known as: Adaptive Clocking - What is it?
In digital circuit design it is often referred to as “clock synchronisation”. The JTAG interface uses one clock (TCK or TCLK) operating at some speed; your CPU target is operating at another. The two clocks are not synchronised; they are “asynchronous”. In order for the two to work together they must be synchronised well enough to work; JTAG can’t go ten times faster than the CPU, for example. There are 2 basic options:
- Use a special "adaptive clocking" circuit to change the JTAG clock rate to match what the CPU currently supports.
- The JTAG clock must be fixed at some speed that’s enough slower than the CPU clock that all TMS and TDI transitions can be detected.
Does this really matter? For some chips and some situations, this is a non-issue, like a 500MHz ARM926 with a 5 MHz JTAG link; the CPU has no difficulty keeping up with JTAG. Startup sequences are often problematic though, as are other situations where the CPU clock rate changes (perhaps to save power).
For example, Atmel AT91SAM chips start operation from reset with a 32kHz system clock. Boot firmware may activate the main oscillator and PLL before switching to a faster clock (perhaps that 500 MHz ARM926 scenario). If you’re using JTAG to debug that startup sequence, you must slow the JTAG clock to sometimes 1 to 4kHz. After startup completes, JTAG can use a faster clock.
Consider also debugging a 500MHz ARM926 hand held battery powered device that enters a low power “deep sleep†mode, at 32kHz CPU clock, between keystrokes unless it has work to do. When would that 5 MHz JTAG clock be usable?
Solution #1 - A special circuit
In order to make use of this, your CPU, board, and JTAG adapter must all support the RTCK feature. Not all of them support this; keep reading!
The RTCK ("Return TCK") signal in some ARM chips is used to help with this problem. ARM has a good description of the problem at this link: [checked 28/nov/2008]. Link title: “How does the JTAG synchronisation logic work? / how does adaptive clocking work?”.
The nice thing about adaptive clocking is that, as in the “battery powered hand held device example”, the adaptiveness works perfectly all the time. One can set a breakpoint or halt the system in the deep power down code, then slowly step out until the system speeds up.
Note that adaptive clocking may also need to work at the board level, when a board-level scan chain has multiple chips. Parallel clock voting schemes are a good way to implement this, both within and between chips, and can easily be implemented with a CPLD. It’s not difficult to have logic fan a module’s input TCK signal out to each TAP in the scan chain, and then wait until each TAP’s RTCK comes back with the right polarity before changing the output RTCK signal. Texas Instruments makes some clock voting logic available for free (with no support) in VHDL form; see
Solution #2 - Always works - but may be slower
Often this is a perfectly acceptable solution.
In most simple terms: often the JTAG clock must be 1/10 to 1/12 of the target clock speed. But what that “magic division” is varies depending on the chips on your board. The ARM rule of thumb: most ARM-based systems require a 6:1 division; ARM11 cores use an 8:1 division. The Xilinx rule of thumb is 1/12 the clock speed.
Note: most full speed FT2232 based JTAG adapters are limited to a maximum of 6MHz. The ones using USB high speed chips (FT2232H) often support faster clock rates (and adaptive clocking).
You can still debug the ’low power’ situations - you just need to either use a fixed and very slow JTAG clock rate ... or else manually adjust the clock speed at every step. (Adjusting is painful and tedious, and is not always practical.)
It is however easy to “code your way around it”, i.e. cheat a little: have a special debug mode in your application that does a “high power sleep”. If you are careful, 98% of your problems can be debugged this way.
Note that on ARM you may need to avoid using the wait for interrupt operation in your idle loops even if you don’t otherwise change the CPU clock rate. That operation gates the CPU clock, and thus the JTAG clock; which prevents JTAG access. One consequence is not being able to
halt cores which are executing that wait for interrupt operation.
To set the JTAG frequency use the command:
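Depending on your OpenOCD version this is spelled adapter_khz (or jtag_khz in older releases); for example:

    adapter_khz 1234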
- Win32 Pathnames Why don’t backslashes work in Windows paths?
OpenOCD uses Tcl and a backslash is an escape char. Use { and } around Windows filenames.
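For example, a sketch with an illustrative Windows path:

    flash write_image erase {c:\temp\abc.bin} 0x0 bin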
- Missing: cygwin1.dll OpenOCD complains about a missing cygwin1.dll.
Make sure you have Cygwin installed, or at least a version of OpenOCD that claims to come with all the necessary DLLs. When using Cygwin, try launching OpenOCD from the Cygwin shell.
- Breakpoint Issue I’m trying to set a breakpoint using GDB (or a frontend like Insight or Eclipse), but OpenOCD complains that "Info: arm7_9_common.c:213 arm7_9_add_breakpoint(): sw breakpoint requested, but software breakpoints not enabled".
GDB issues software breakpoints when a normal breakpoint is requested, or to implement source-line single-stepping. On ARMv4T systems, like ARM7TDMI, ARM720T or ARM920T, software breakpoints consume one of the two available hardware breakpoints.
- LPC2000 Flash When erasing or writing LPC2000 on-chip flash, the operation fails at random.
Make sure the core frequency specified in the ‘flash lpc2000’ line matches the clock at the time you’re programming the flash. If you’ve specified the crystal’s frequency, make sure the PLL is disabled. If you’ve specified the full core speed (e.g. 60MHz), make sure the PLL is enabled.
- Amontec Chameleon When debugging using an Amontec Chameleon in its JTAG Accelerator configuration, I keep getting "Error: amt_jtagaccel.c:184 amt_wait_scan_busy(): amt_jtagaccel timed out while waiting for end of scan, rtck was disabled".
Make sure your PC’s parallel port operates in EPP mode. You might have to try several settings in your PC BIOS (ECP, EPP, and different versions of those).
- Data Aborts When debugging with OpenOCD and GDB (plain GDB, Insight, or Eclipse), I get lots of "Error: arm7_9_common.c:1771 arm7_9_read_memory(): memory read caused data abort".
The errors are non-fatal, and are the result of GDB trying to trace stack frames beyond the last valid frame. It might be possible to prevent this by setting up a proper "initial" stack frame; if you happen to know what exactly has to be done, feel free to add this here. Simple: in your startup code, push 8 registers of zeros onto the stack before calling main(). What GDB is doing is “climbing” the run time stack by reading various values on the stack using the standard call frame for the target. GDB keeps going until one of 2 things happen: #1, an invalid frame is found, or #2, some huge number of stack frames have been processed. By pushing zeros on the stack, GDB gracefully stops. Debugging Interrupt Service Routines - in your ISR, before you call your C code, do the same: artificially push some zeros onto the stack, and remember to pop them off when the ISR is done. Also note: if you have a multi-threaded operating system, it often will not, in the interest of saving memory, waste these few bytes. Painful...
- JTAG Reset Config I get the following message in the OpenOCD console (or log file): "Warning: arm7_9_common.c:679 arm7_9_assert_reset(): srst resets test logic, too".
This warning doesn’t indicate any serious problem, as long as you don’t want to debug your core right out of reset. Your .cfg file specified ‘jtag_reset trst_and_srst srst_pulls_trst’ to tell OpenOCD that either your board, your debugger or your target uC (e.g. LPC2000) can’t assert the two reset signals independently. With this setup, it’s not possible to halt the core right out of reset, everything else should work fine.
- USB Power When using OpenOCD in conjunction with Amontec JTAGkey and the Yagarto toolchain (Eclipse, arm-elf-gcc, arm-elf-gdb), the debugging seems to be unstable. When single-stepping over large blocks of code, GDB and OpenOCD quit with an error message. Is there a stability issue with OpenOCD?
No, this is not a stability issue concerning OpenOCD. Most users have solved this issue by simply using a self-powered USB hub, which they connect their Amontec JTAGkey to. Apparently, some computers do not provide a USB power supply stable enough for the Amontec JTAGkey to be operated. Laptops running on battery have this problem too...
- USB Power When using the Amontec JTAGkey, sometimes OpenOCD crashes with the following error messages: "Error: ft2232.c:201 ft2232_read(): FT_Read returned: 4" and "Error: ft2232.c:365 ft2232_send_and_recv(): couldn’t read from FT2232". What does that mean and what might be the reason for this?
First of all, the reason might be the USB power supply. Try using a self-powered hub instead of a direct connection to your computer. Secondly, the error code 4 corresponds to an FT_IO_ERROR, which means that the driver for the FTDI USB chip ran into some sort of error - this points us to a USB problem.
- GDB Disconnects When using the Amontec JTAGkey, sometimes OpenOCD crashes with the following error message: "Error: gdb_server.c:101 gdb_get_char(): read: 10054". What does that mean and what might be the reason for this?
Error code 10054 corresponds to WSAECONNRESET, which means that the debugger (GDB) has closed the connection to OpenOCD. This might be a GDB issue.
- LPC2000 Flash In the configuration file in the section where flash device configurations are described, there is a parameter for specifying the clock frequency for LPC2000 internal flash devices (e.g. ‘flash bank $_FLASHNAME lpc2000 0x0 0x40000 0 0 $_TARGETNAME lpc2000_v1 14746 calc_checksum’), which must be specified in kilohertz. However, I do have a quartz crystal of a frequency that contains fractions of kilohertz (e.g. 14,745,600 Hz, i.e. 14,745.600 kHz). Is it possible to specify real numbers for the clock frequency?
No. The clock frequency specified here must be given as an integral number. However, this clock frequency is used by the In-Application-Programming (IAP) routines of the LPC2000 family only, which seems to be very tolerant concerning the given clock frequency, so a slight difference between the specified clock frequency and the actual clock frequency will not cause any trouble.
- Command Order Do I have to keep a specific order for the commands in the configuration file?
Well, yes and no. Commands can be given in arbitrary order, yet the devices listed for the JTAG scan chain must be given in the right order (jtag newdevice), with the device closest to the TDO-Pin being listed first. In general, whenever objects of the same type exist which require an index number, then these objects must be given in the right order (jtag newtap, targets and flash banks - a target references a jtag newtap and a flash bank references a target).
You can use the “scan_chain” command to verify and display the tap order.
Also, some commands can’t execute until after
init has been processed. Such commands include
nand probe and everything else that needs to write to controller registers, perhaps for setting up DRAM and loading it with code.
- JTAG TAP Order Do I have to declare the TAPS in some particular order?
Yes; whenever you have more than one, you must declare them in the same order used by the hardware. Many newer devices have multiple JTAG TAPs. For example: ST Microsystems STM32 chips have two TAPs, a “boundary scan TAP” and a “Cortex-M3” TAP. Example: In the STM32 reference manual, Document ID: RM0008, Section 26.5, Figure 259, page 651/681, the “TDI” pin is connected to the boundary scan TAP, which then connects to the Cortex-M3 TAP, which then connects to the TDO pin. Thus, the proper order for the STM32 chip is: (1) the Cortex-M3, then (2) the boundary scan TAP. If your board includes an additional JTAG chip in the scan chain (for example a Xilinx CPLD or FPGA) you could place it before or after the STM32 chip in the chain. For example:
- OpenOCD_TDI(output) -> STM32 TDI Pin (BS Input)
- STM32 BS TDO (output) -> STM32 Cortex-M3 TDI (input)
- STM32 Cortex-M3 TDO (output) -> STM32 TDO Pin
- STM32 TDO Pin (output) -> Xilinx TDI Pin (input)
- Xilinx TDO Pin -> OpenOCD TDO (input)
The “jtag device” commands would thus be in the order shown below. Note:
- jtag newtap Xilinx tap -irlen ...
- jtag newtap stm32 cpu -irlen ...
- jtag newtap stm32 bs -irlen ...
- # Create the debug target and say where it is
- target create stm32.cpu -chain-position stm32.cpu ...
- SYSCOMP Sometimes my debugging session terminates with an error. When I look into the log file, I can see these error messages: Error: arm7_9_common.c:561 arm7_9_execute_sys_speed(): timeout waiting for SYSCOMP
TODO.
23. Tcl Crash Course
Not everyone knows Tcl - this is not intended to be a replacement for learning Tcl; the intent of this chapter is to give you some idea of how the Tcl scripts work.
This chapter is written with two audiences in mind. (1) OpenOCD users who need to understand a bit more of how Jim-Tcl works so they can do something useful, and (2) those that want to add a new command to OpenOCD.
23.1 Tcl Rule #1
There is a famous joke, it goes like this:
- Rule #1: The wife is always correct
- Rule #2: If you think otherwise, See Rule #1
The Tcl equivalent is this:
- Rule #1: Everything is a string
- Rule #2: If you think otherwise, See Rule #1
As in the famous joke, the consequences of Rule #1 are profound. Once you understand Rule #1, you will understand Tcl.
23.2 Tcl Rule #1b
There is a second pair of rules.
- Rule #1: Control flow does not exist. Only commands
For example: the classic FOR loop or IF statement is not a control flow item; they are commands. There is no such thing as control flow in Tcl.
- Rule #2: If you think otherwise, See Rule #1
Actually, what happens is this: there are commands that, by convention, act like control flow keywords in other languages. One of those commands is the word “for”; another command is “if”.
23.3 Per Rule #1 - All Results are strings
Every Tcl command results in a string. The word “result” is used deliberately. No result is just an empty string. Remember: Rule #1 - Everything is a string
23.4 Tcl Quoting Operators
In the life of a Tcl script, there are two important periods of time; the difference is subtle.
- Parse Time
- Evaluation Time
The two key items here are how “quoted things” work in Tcl. Tcl has three primary quoting constructs: the [square-brackets], the {curly-braces} and the “double-quotes”.
By now you should know $VARIABLES always start with a $DOLLAR sign. BTW: To set a variable, you actually use the command “set”, as in “set VARNAME VALUE”, much like the ancient BASIC language “let x = 1” statement, but without the equal sign.
- [square-brackets]
[square-brackets] are command substitutions. They operate much like Unix shell ‘back-ticks’. The result of a [square-bracket] operation is exactly 1 string. Remember Rule #1 - Everything is a string. These two statements are roughly identical:
- “double-quoted-things”
“double-quoted-things” are just simply quoted text. $VARIABLES and [square-brackets] are expanded in place - the result however is exactly 1 string. Remember Rule #1 - Everything is a string
- {Curly-Braces}
{Curly-Braces} are magic: $VARIABLES and [square-brackets] are parsed, but are NOT expanded or executed. {Curly-Braces} are like ’single-quote’ operators in BASH shell scripts, with the added feature that {curly-braces} can be nested, while single quotes cannot: {{{this is nested 3 times}}} NOTE: [date] is a bad example; at this writing, Jim/OpenOCD does not have a date command.
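As a rough analogy (this is Python, not Tcl, and purely illustrative): double-quoted text behaves like immediate interpolation, while a curly-braced body behaves like text whose substitution is deferred until it is evaluated later:

```python
x = "now"
immediate = f"value is {x}"       # like "double-quotes": $x is expanded right here
deferred = "value is {x}"         # like {curly-braces}: kept verbatim, no expansion
x = "later"
evaluated_later = deferred.format(x=x)   # expansion happens at evaluation time
```

In Tcl, that deferred behaviour is what makes {curly-braces} the natural way to hand bodies of code to commands such as proc and for.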
23.5 Consequences of Rule 1/2/3/4
The consequences of Rule 1 are profound.
23.5.1 Tokenisation & Execution.
Of course, whitespace, blank lines and #comment lines are handled in the normal way.
As a script is parsed, each (multi-)line in the script file is tokenised according to the quoting rules. After tokenisation, that line is immediately executed.
Multi-line statements end with one or more “still-open” {curly-braces}, which eventually close a few lines later.
23.5.2 Command Execution
Remember earlier: There are no “control flow†statements in Tcl. Instead there are COMMANDS that simply act like control flow operators.
Commands are executed like this:
- Parse the next line into (argc) and (argv[]).
- Look up (argv[0]) in a table and call its function.
- Repeat until End Of File.
It sort of works like this:
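The original manual illustrates this with code; as a stand-in, here is a schematic model sketched in Python (OpenOCD's real implementation is C code in Jim-Tcl, and real tokenisation honours the quoting rules rather than splitting on whitespace):

```python
def run_script(script, command_table):
    """Toy model of the Tcl execution loop: parse, look up, call, repeat."""
    for line in script.splitlines():          # take the next line
        line = line.strip()
        if not line or line.startswith("#"):  # blanks and #comments are skipped
            continue
        argv = line.split()                   # tokenise into argv[] (simplified!)
        command_table[argv[0]](argv[1:])      # look up argv[0] in a table, call it

output = []
table = {"puts": lambda args: output.append(" ".join(args))}
run_script("# a comment\nputs hello world\n", table)
# output is now ["hello world"]
```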
When the command “proc” is parsed (which creates a procedure function) it gets 3 parameters on the command line: 1. the name of the proc (function), 2. the list of parameters, and 3. the body of the function. Note the choice of words: LIST and BODY. The PROC command stores these items in a table somewhere so it can be found by “LookupCommand()”
23.5.3 The FOR command
The most interesting command to look at is the FOR command. In Tcl, the FOR command is normally implemented in C. Remember, FOR is a command just like any other command.
When the ascii text containing the FOR command is parsed, the parser produces 5 parameter strings (if in doubt, refer to Rule #1). They are:
- The ascii text ’for’
- The start text
- The test expression
- The next text
- The body text
Sort of reminds you of “main( int argc, char **argv )”, does it not? Remember Rule #1 - Everything is a string. The key point is this: Often many of those parameters are in {curly-braces} - thus the variables inside are not expanded or replaced until later.
Remember that every Tcl command looks like the classic “main( argc, argv )” function in C. In JimTCL - they actually look like this:
Real Tcl is nearly identical. The newer versions have introduced a byte-code parser and interpreter, but at the core, it still operates in the same basic way.
23.5.4 FOR command implementation
To understand Tcl it is perhaps most helpful to see the FOR command. Remember, it is a COMMAND not a control flow structure.
In Tcl there are two underlying C helper functions.
Remember Rule #1 - You are a string.
The first helper parses and executes commands found in an ascii string. Commands can be separated by semicolons, or newlines. While parsing, variables are expanded via the quoting rules.
The second helper evaluates an ascii string as a numerical expression and returns a value.
Here is an example of how the FOR command could be implemented. The pseudo code below does not show error handling.
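The manual's pseudo code is not reproduced here; as a stand-in, here is an equivalent sketch in Python (hypothetical names, no error handling). The Interp class models the two C helpers: eval executes the commands in a string, and expr evaluates a string as a numerical expression:

```python
class Interp:
    """Stand-in interpreter exposing the two helper functions."""
    def __init__(self):
        self.vars = {}
    def eval(self, text):
        exec(text, {}, self.vars)        # helper 1: parse and execute commands
    def expr(self, text):
        return eval(text, {}, self.vars) # helper 2: evaluate a numeric expression

def for_command(interp, start, test, next_, body):
    """FOR receives four strings and just drives the helpers -- it is a command."""
    interp.eval(start)
    while interp.expr(test):
        interp.eval(body)
        interp.eval(next_)

it = Interp()
it.vars["total"] = 0
for_command(it, "i = 0", "i < 5", "i = i + 1", "total = total + i")
# total accumulates 0+1+2+3+4
```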
Every other command IF, WHILE, FORMAT, PUTS, EXPR, everything works in the same basic way.
23.6 OpenOCD Tcl Usage
23.6.1 source and find commands
Where: In many configuration files
Example: source [find FILENAME]
Remember the parsing rules
- The “find” command is in square brackets, and is executed with the parameter FILENAME. It should find and return the full path to a file with that name; it uses an internal search path. The RESULT is a string, which is substituted into the command line in place of the bracketed “find” command. (Don’t try to use a FILENAME which includes the "#" character. That character begins Tcl comments.)
- The “source” command is executed with the resulting filename; it reads a file and executes it as a script.
23.6.2 format command
Where: Generally occurs in numerous places.
Tcl has no command like printf(), instead it has format, which is really more like sprintf(). Example
- The SET command creates 2 variables, X and Y.
- The double [nested] EXPR command performs math
The EXPR command produces numerical result as a string.
Refer to Rule #1
- The format command is executed, producing a single string
Refer to Rule #1.
- The PUTS command outputs the text.
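The steps above, rendered as a rough Python analogue (the values are illustrative; Tcl's format behaves like C's sprintf()):

```python
x = 6                               # SET creates the variables
y = 7
product = x * y                     # the nested EXPR performs the math;
                                    # in Tcl its result is the string "42"
line = "The answer: %d" % product   # format produces a single string
print(line)                         # PUTS outputs the text
```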
23.6.3 Body or Inlined Text
Where: Various TARGET scripts.
- The $_TARGETNAME is an OpenOCD variable convention.
$_TARGETNAME represents the last target created, the value changes each time a new target is created. Remember the parsing rules. When the ascii text is parsed, the $_TARGETNAME becomes a simple string, the name of the target which happens to be a TARGET (object) command.
- The 2nd parameter to the ‘-event’ parameter is a TCLBODY
There are 4 examples:
- The TCLBODY is a simple string that happens to be a proc name
- The TCLBODY is several simple commands separated by semicolons
- The TCLBODY is a multi-line {curly-brace} quoted string
- The TCLBODY is a string with variables that get expanded.
In the end, when the target event FOO occurs the TCLBODY is evaluated. Method #1 and #2 are functionally identical. For Method #3 and #4 it is more interesting. What is the TCLBODY? Remember the parsing rules. In case #3, {curly-braces} mean the $VARS and [square-brackets] are expanded later, when the EVENT occurs, and the text is evaluated. In case #4, they are replaced before the “Target Object Command” is executed. This occurs at the same time $_TARGETNAME is replaced. In case #4 the date will never change. {BTW: [date] is a bad example; at this writing, Jim/OpenOCD does not have a date command}
23.6.4 Global Variables
Where: You might discover this when writing your own procs
In simple terms: Inside a PROC, if you need to access a global variable you must say so. See also “upvar”. Example:
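The Tcl example itself is not reproduced here, but Python functions happen to have the same rule, which makes a convenient illustration of the idea:

```python
counter = 0  # a global variable

def bump():
    global counter       # inside the proc, you must say you want the global
    counter = counter + 1

bump()
bump()
# counter is now 2
```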
23.7 Other Tcl Hacks
Dynamic variable creation
Dynamic proc/command creation
OpenOCD Concept Index
Command and Driver Index
Footnotes
[#DOCF1 (1)]
Note that many systems support a "monitor mode" debug that is a somewhat cleaner way to address such issues. You can think of it as only halting part of the system, maybe just one task, instead of the whole thing. At this writing, January 2010, OpenOCD based debugging does not support monitor mode debug, only "halt mode" debug.
[#DOCF2 (2)]
See chapter 8 "Semihosting" in ARM DUI 0203I, the "RealView Compilation Tools Developer Guide". The CodeSourcery EABI toolchain also includes a semihosting library.
[#DOCF3 (3)]
As a more polite alternative, some processors have special debug-oriented registers which can be used to change various features including how the low power states are clocked while debugging. The STM32 DBGMCU_CR register is an example; at the cost of extra power consumption, JTAG can be used during low power states.
[#DOCF4 (4)]
A FAQ gives details.
[#DOCF5 (5)]
See the ST document titled: STR91xFAxxx, Section 3.15 Jtag Interface, Page: 28/102, Figure 3: JTAG chaining inside the STR91xFA.
[#DOCF6 (6)]
Currently there is a stellaris mass_erase command. That seems pointless since the same effect can be had using the standard flash erase_address command.
[#DOCF7 (7)]
Currently there is a stm32f1x mass_erase command. That seems pointless since the same effect can be had using the standard flash erase_address command.
Table of Contents
- [#About About]
- [#What-is-OpenOCD_003f 0.1 What is OpenOCD?]
- [#OpenOCD-Web-Site 0.2 OpenOCD Web Site]
- [#Latest-User_0027s-Guide_003a 0.3 Latest User’s Guide:]
- [#OpenOCD-User_0027s-Forum 0.4 OpenOCD User’s Forum]
- [#Developers 1. OpenOCD Developer Resources]
- [#OpenOCD-GIT-Repository 1.1 OpenOCD GIT Repository]
- [#Doxygen-Developer-Manual 1.2 Doxygen Developer Manual]
- [#OpenOCD-Developer-Mailing-List 1.3 OpenOCD Developer Mailing List]
- [#OpenOCD-Bug-Database 1.4 OpenOCD Bug Database]
- [#Debug-Adapter-Hardware 2. Debug Adapter Hardware]
- [#Choosing-a-Dongle 2.1 Choosing a Dongle]
- [#Stand-alone-Systems 2.2 Stand alone Systems]
- [#USB-FT2232-Based 2.3 USB FT2232 Based]
- [#USB_002dJTAG-_002f-Altera-USB_002dBlaster-compatibles 2.4 USB-JTAG / Altera USB-Blaster compatibles]
- [#USB-JLINK-based 2.5 USB JLINK based]
- [#USB-RLINK-based 2.6 USB RLINK based]
- [#USB-ST_002dLINK-based 2.7 USB ST-LINK based]
- [#USB-Other 2.8 USB Other]
- [#IBM-PC-Parallel-Printer-Port-Based 2.9 IBM PC Parallel Printer Port Based]
- [#Other_002e_002e_002e 2.10 Other...]
- [#About-Jim_002dTcl 3. About Jim-Tcl]
- [#Running 4. Running]
- [#Simple-setup_002c-no-customization 4.1 Simple setup, no customization]
- [#What-OpenOCD-does-as-it-starts 4.2 What OpenOCD does as it starts]
- [#OpenOCD-Project-Setup 5. OpenOCD Project Setup]
- [#Hooking-up-the-JTAG-Adapter 5.1 Hooking up the JTAG Adapter]
- [#Project-Directory 5.2 Project Directory]
- [#Configuration-Basics 5.3 Configuration Basics]
- [#User-Config-Files 5.4 User Config Files]
- [#Project_002dSpecific-Utilities 5.5 Project-Specific Utilities]
- [#Target-Software-Changes 5.6 Target Software Changes]
- [#Target-Hardware-Setup 5.7 Target Hardware Setup]
- [#Config-File-Guidelines 6. Config File Guidelines]
- [#Interface-Config-Files 6.1 Interface Config Files]
- [#Board-Config-Files 6.2 Board Config Files]
- [#Communication-Between-Config-files 6.2.1 Communication Between Config files]
- [#Variable-Naming-Convention 6.2.2 Variable Naming Convention]
- [#The-reset_002dinit-Event-Handler 6.2.3 The reset-init Event Handler]
- [#JTAG-Clock-Rate 6.2.4 JTAG Clock Rate]
- [#The-init_005fboard-procedure-1 6.2.5 The init_board procedure]
- [#Target-Config-Files 6.3 Target Config Files]
- [#Default-Value-Boiler-Plate-Code 6.3.1 Default Value Boiler Plate Code]
- [#Adding-TAPs-to-the-Scan-Chain 6.3.2 Adding TAPs to the Scan Chain]
- [#Add-CPU-targets 6.3.3 Add CPU targets]
- [#Define-CPU-targets-working-in-SMP-1 6.3.4 Define CPU targets working in SMP]
- [#Chip-Reset-Setup 6.3.5 Chip Reset Setup]
- [#The-init_005ftargets-procedure-1 6.3.6 The init_targets procedure]
- [#ARM-Core-Specific-Hacks 6.3.7 ARM Core Specific Hacks]
- [#Internal-Flash-Configuration 6.3.8 Internal Flash Configuration]
- [#Translating-Configuration-Files-1 6.4 Translating Configuration Files]
- [#Daemon-Configuration 7. Daemon Configuration]
- [#Configuration-Stage-1 7.1 Configuration Stage]
- [#Entering-the-Run-Stage-1 7.2 Entering the Run Stage]
- [#TCP_002fIP-Ports-1 7.3 TCP/IP Ports]
- [#GDB-Configuration-1 7.4 GDB Configuration]
- [#Event-Polling-1 7.5 Event Polling]
- [#Debug-Adapter-Configuration 8. Debug Adapter Configuration]
- [#Interface-Configuration 8.1 Interface Configuration]
- [#Interface-Drivers 8.2 Interface Drivers]
- [#Transport-Configuration 8.3 Transport Configuration]
- [#JTAG-Transport 8.3.1 JTAG Transport]
- [#SWD-Transport 8.3.2 SWD Transport]
- [#SPI-Transport 8.3.3 SPI Transport]
- [#JTAG-Speed-1 8.4 JTAG Speed]
- [#Reset-Configuration 9. Reset Configuration]
- [#Types-of-Reset 9.1 Types of Reset]
- [#SRST-and-TRST-Issues-1 9.2 SRST and TRST Issues]
- [#Commands-for-Handling-Resets 9.3 Commands for Handling Resets]
- [#Custom-Reset-Handling 9.4 Custom Reset Handling]
- [#TAP-Declaration 10. TAP Declaration]
- [#Scan-Chains 10.1 Scan Chains]
- [#TAP-Names 10.2 TAP Names]
- [#TAP-Declaration-Commands 10.3 TAP Declaration Commands]
- [#Other-TAP-commands 10.4 Other TAP commands]
- [#TAP-Events-1 10.5 TAP Events]
- [#Enabling-and-Disabling-TAPs-1 10.6 Enabling and Disabling TAPs]
- [#Autoprobing-1 10.7 Autoprobing]
- [#CPU-Configuration 11. CPU Configuration]
- [#Target-List 11.1 Target List]
- [#Target-CPU-Types-and-Variants 11.2 Target CPU Types and Variants]
- [#Target-Configuration-1 11.3 Target Configuration]
- [#Other-_0024target_005fname-Commands 11.4 Other $target_name Commands]
- [#Target-Events-1 11.5 Target Events]
- [#Flash-Commands 12. Flash Commands]
- [#Flash-Configuration-Commands 12.1 Flash Configuration Commands]
- [#Erasing_002c-Reading_002c-Writing-to-Flash 12.2 Erasing, Reading, Writing to Flash]
- [#Other-Flash-commands 12.3 Other Flash commands]
- [#Flash-Driver-List-1 12.4 Flash Driver List]
- [#External-Flash 12.4.1 External Flash]
- [#Internal-Flash-_0028Microcontrollers_0029 12.4.2 Internal Flash (Microcontrollers)]
- [#str9xpec-driver 12.4.3 str9xpec driver]
- [#mFlash 12.5 mFlash]
- [#mFlash-Configuration 12.5.1 mFlash Configuration]
- [#mFlash-commands 12.5.2 mFlash commands]
- [#NAND-Flash-Commands 13. NAND Flash Commands]
- [#NAND-Configuration-Commands 13.1 NAND Configuration Commands]
- [#Erasing_002c-Reading_002c-Writing-to-NAND-Flash 13.2 Erasing, Reading, Writing to NAND Flash]
- [#Other-NAND-commands 13.3 Other NAND commands]
- [#NAND-Driver-List-1 13.4 NAND Driver List]
- [#PLD_002fFPGA-Commands 14. PLD/FPGA Commands]
- [#PLD_002fFPGA-Configuration-and-Commands 14.1 PLD/FPGA Configuration and Commands]
- [#PLD_002fFPGA-Drivers_002c-Options_002c-and-Commands 14.2 PLD/FPGA Drivers, Options, and Commands]
- [#General-Commands 15. General Commands]
- [#Daemon-Commands 15.1 Daemon Commands]
- [#Target-State-handling-1 15.2 Target State handling]
- [#I_002fO-Utilities 15.3 I/O Utilities]
- [#Memory-access-commands 15.4 Memory access commands]
- [#Image-loading-commands 15.5 Image loading commands]
- [#Breakpoint-and-Watchpoint-commands 15.6 Breakpoint and Watchpoint commands]
- [#Misc-Commands 15.7 Misc Commands]
- [#Architecture-and-Core-Commands 16. Architecture and Core Commands]
- [#ARM-Hardware-Tracing-1 16.1 ARM Hardware Tracing]
- [#ETM-Configuration 16.1.1 ETM Configuration]
- [#ETM-Trace-Operation 16.1.2 ETM Trace Operation]
- [#Trace-Port-Drivers-1 16.1.3 Trace Port Drivers]
- [#Generic-ARM 16.2 Generic ARM]
- [#ARMv4-and-ARMv5-Architecture 16.3 ARMv4 and ARMv5 Architecture]
- [#ARM7-and-ARM9-specific-commands 16.3.1 ARM7 and ARM9 specific commands]
- [#ARM720T-specific-commands 16.3.2 ARM720T specific commands]
- [#ARM9-specific-commands 16.3.3 ARM9 specific commands]
- [#ARM920T-specific-commands 16.3.4 ARM920T specific commands]
- [#ARM926ej_002ds-specific-commands 16.3.5 ARM926ej-s specific commands]
- [#ARM966E-specific-commands 16.3.6 ARM966E specific commands]
- [#XScale-specific-commands 16.3.7 XScale specific commands]
- [#ARMv6-Architecture 16.4 ARMv6 Architecture]
- [#ARM11-specific-commands 16.4.1 ARM11 specific commands]
- [#ARMv7-Architecture 16.5 ARMv7 Architecture]
- [#ARMv7-Debug-Access-Port-_0028DAP_0029-specific-commands 16.5.1 ARMv7 Debug Access Port (DAP) specific commands]
- [#Cortex_002dM3-specific-commands 16.5.2 Cortex-M3 specific commands]
- [#Software-Debug-Messages-and-Tracing-1 16.6 Software Debug Messages and Tracing]
- [#JTAG-Commands 17. JTAG Commands]
- [#Low-Level-JTAG-Commands 17.1 Low Level JTAG Commands]
- [#TAP-state-names 17.2 TAP state names]
- [#Boundary-Scan-Commands 18. Boundary Scan Commands]
- [#SVF_003a-Serial-Vector-Format 18.1 SVF: Serial Vector Format]
- [#XSVF_003a-Xilinx-Serial-Vector-Format 18.2 XSVF: Xilinx Serial Vector Format]
- [#TFTP 19. TFTP]
- [#GDB-and-OpenOCD 20. GDB and OpenOCD]
- [#Connecting-to-GDB-1 20.1 Connecting to GDB]
- [#Sample-GDB-session-startup 20.2 Sample GDB session startup]
- [#Configuring-GDB-for-OpenOCD 20.3 Configuring GDB for OpenOCD]
- [#Programming-using-GDB 20.4 Programming using GDB]
- [#Using-openocd-SMP-with-GDB-1 20.5 Using openocd SMP with GDB]
- [#Tcl-Scripting-API 21. Tcl Scripting API]
- [#API-rules 21.1 API rules]
- [#Internal-low_002dlevel-Commands 21.2 Internal low-level Commands]
- [#OpenOCD-specific-Global-Variables 21.3 OpenOCD specific Global Variables]
- [#FAQ 22. FAQ]
- [#Tcl-Crash-Course 23. Tcl Crash Course]
- [#Tcl-Rule-_00231 23.1 Tcl Rule #1]
- [#Tcl-Rule-_00231b 23.2 Tcl Rule #1b]
- [#Per-Rule-_00231-_002d-All-Results-are-strings 23.3 Per Rule #1 - All Results are strings]
- [#Tcl-Quoting-Operators 23.4 Tcl Quoting Operators]
- [#Consequences-of-Rule-1_002f2_002f3_002f4 23.5 Consequences of Rule 1/2/3/4]
- [#Tokenisation-_0026-Execution_002e 23.5.1 Tokenisation & Execution.]
- [#Command-Execution 23.5.2 Command Execution]
- [#The-FOR-command 23.5.3 The FOR command]
- [#FOR-command-implementation 23.5.4 FOR command implementation]
- [#OpenOCD-Tcl-Usage 23.6 OpenOCD Tcl Usage]
- [#source-and-find-commands 23.6.1 source and find commands]
- [#format-command 23.6.2 format command]
- [#Body-or-Inlined-Text 23.6.3 Body or Inlined Text]
- [#Global-Variables 23.6.4 Global Variables]
- [#Other-Tcl-Hacks 23.7 Other Tcl Hacks]
- [#OpenOCD-Concept-Index OpenOCD Concept Index]
- [#Command-and-Driver-Index Command and Driver Index]
Short Table of Contents
- [#About About]
- [#Developers 1. OpenOCD Developer Resources]
- [#Debug-Adapter-Hardware 2. Debug Adapter Hardware]
- [#About-Jim_002dTcl 3. About Jim-Tcl]
- [#Running 4. Running]
- [#OpenOCD-Project-Setup 5. OpenOCD Project Setup]
- [#Config-File-Guidelines 6. Config File Guidelines]
- [#Daemon-Configuration 7. Daemon Configuration]
- [#Debug-Adapter-Configuration 8. Debug Adapter Configuration]
- [#Reset-Configuration 9. Reset Configuration]
- [#TAP-Declaration 10. TAP Declaration]
- [#CPU-Configuration 11. CPU Configuration]
- [#Flash-Commands 12. Flash Commands]
- [#NAND-Flash-Commands 13. NAND Flash Commands]
- [#PLD_002fFPGA-Commands 14. PLD/FPGA Commands]
- [#General-Commands 15. General Commands]
- [#Architecture-and-Core-Commands 16. Architecture and Core Commands]
- [#JTAG-Commands 17. JTAG Commands]
- [#Boundary-Scan-Commands 18. Boundary Scan Commands]
- [#TFTP 19. TFTP]
- [#GDB-and-OpenOCD 20. GDB and OpenOCD]
- [#Tcl-Scripting-API 21. Tcl Scripting API]
- [#FAQ 22. FAQ]
- [#Tcl-Crash-Course 23. Tcl Crash Course]
- [#OpenOCD-Concept-Index OpenOCD Concept Index]
- [#Command-and-Driver-Index Command and Driver Index]
About This Document
This document was generated by Bill Traynor on May 9, 2012 using texi2html 1.82.
pyvoro 1.3.2
2D and 3D Voronoi tessellations: a python entry point for the voro++ library.
pyvoro
======
3D Voronoi tessellations: a python entry point for the [voro++ library]
**Recently Added Features:**
*Released on PyPI* - thanks to a contribution from @ansobolev, you can now install the project with
`pip` - just type `pip install pyvoro`, with sudo if that's your thing.
*support for numpy arrays* - thanks to a contribution from @christopherpoole, you can now pass in
a 2D (Nx3 or Nx2) numpy array.
*2D helper*, which translates the results of a 3D tessellation of points on the plane back into
2D vectors and cells (see below for an example.)
*Radical (weighted) option*, which weights the voronoi cell sizes according to a set of supplied
radius values.
*periodic boundary support*, note that each cell is returned in the frame of reference of its source
point, so points can (and will) be outside the bounding box.
Installation
------------
Recommended - installation via `pip`:
pip install pyvoro
Installation from source is the same as for any other python module. Issuing
python setup.py install
will install pyvoro system-wide, while
python setup.py install --user
will install it only for the current user. Any
[other] `setup.py` keywords
can also be used, including
python setup.py develop
to install the package in 'development' mode. Alternatively, if you want all the dependencies pulled in automatically,
you can still use `pip`:
pip install -e .
The `-e` option makes pip install the package from source in development mode.
You can then use the code with:
import pyvoro
pyvoro.compute_voronoi( ... )
pyvoro.compute_2d_voronoi( ... )
Example:
--------
```python
import pyvoro
pyvoro.compute_voronoi(
[[1.0, 2.0, 3.0], [4.0, 5.5, 6.0]], # point positions
[[0.0, 10.0], [0.0, 10.0], [0.0, 10.0]], # limits
2.0, # block size
radii=[1.3, 1.4] # particle radii -- optional, and keyword-compatible arg.
)
```
returning an array of voronoi cells in the form:
```python
{ # (note, this cell is not calculated using the above example)
'volume': 6.07031902214448,
'faces': [
{'adjacent_cell': 1, 'vertices': [1, 5, 8, 3]},
{'adjacent_cell': -3, 'vertices': [1, 0, 2, 6, 5]},
{'adjacent_cell': -5, 'vertices': [1, 3, 9, 7, 0]},
{'adjacent_cell': 146, 'vertices': [2, 4, 11, 10, 6]},
{'adjacent_cell': -1, 'vertices': [2, 0, 7, 4]},
{'adjacent_cell': 9, 'vertices': [3, 8, 10, 11, 9]},
{'adjacent_cell': 11, 'vertices': [4, 7, 9, 11]},
{'adjacent_cell': 139, 'vertices': [5, 6, 10, 8]}
],
'adjacency': [
[1, 2, 7],
[5, 0, 3],
[4, 0, 6],
[8, 1, 9],
[11, 7, 2],
[6, 1, 8],
[2, 5, 10],
[9, 0, 4],
[5, 3, 10],
[11, 3, 7],
[6, 8, 11],
[10, 9, 4]
],
'original': [1.58347382116, 0.830481034382, 0.84264445125],
'vertices': [
[0.0, 0.0, 0.0],
[2.6952010660213537, 0.0, 0.0],
[0.0, 0.0, 1.3157105644765856],
[2.6796085747800173, 0.9893738662896467, 0.0],
[0.0, 1.1577688788929044, 0.9667194826924593],
[2.685575135451888, 0.0, 1.2139446383811037],
[1.5434724537773115, 0.0, 2.064891808748473],
[0.0, 1.2236852383897006, 0.0],
[2.6700186049990116, 1.0246853171897545, 1.1392273839598812],
[1.6298653128290692, 1.8592211309121414, 0.0],
[1.8470793965350985, 1.7199178301499591, 1.6938166537039874],
[1.7528279426840703, 1.7963648490662445, 1.625024494263244]
]
}
```
Note that this particle was the closest to the coordinate system origin - hence
(unimportantly) lots of vertex positions that are zero or roughly zero, and
(importantly) **negative cell ids**, which correspond to the boundaries. There
are three boundaries at the corner of a box - specifically the `x_i = 0`
boundaries, represented with the negative ids `-1`, `-3` and `-5` - which is
voro++'s conventional way of referring to boundary interfaces.
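For example, a small helper (not part of pyvoro itself) can pick out the wall-facing faces of a cell by that sign convention; the face ids below are abbreviated from the sample cell above:

```python
def boundary_faces(cell):
    """Return the faces whose neighbour id is negative, i.e. box walls."""
    return [f for f in cell["faces"] if f["adjacent_cell"] < 0]

# the 'adjacent_cell' ids of the sample 3D cell above, abbreviated
sample = {"faces": [
    {"adjacent_cell": 1},   {"adjacent_cell": -3},
    {"adjacent_cell": -5},  {"adjacent_cell": 146},
    {"adjacent_cell": -1},  {"adjacent_cell": 9},
    {"adjacent_cell": 11},  {"adjacent_cell": 139},
]}
walls = sorted(f["adjacent_cell"] for f in boundary_faces(sample))
# walls is [-5, -3, -1]: the three x_i = 0 box walls from the text above
```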
Initially only non-radical tessellation, and computing *all* information
(including cell adjacency). Other code paths may be added later.
2D tessellation
---------------
You can now run a simpler function to get the 2D cells around your points, with all the details
handled for you:
```python
import pyvoro
cells = pyvoro.compute_2d_voronoi(
[[5.0, 7.0], [1.7, 3.2], ...], # point positions, 2D vectors this time.
[[0.0, 10.0], [0.0, 10.0]], # box size, again only 2D this time.
2.0, # block size; same as before.
radii=[1.2, 0.9, ...] # particle radii -- optional and keyword-compatible.
)
```
The output follows the same schema as the 3D version for now, since this is not as annoying as having a
whole new schema to handle. The adjacency is now a bit redundant, since the cell is a polygon and the
vertices are returned in the correct order. The cells look like a list of these:
```python
{ # note that again, this is computed with a different example
'adjacency': [
[5, 1],
[0, 2],
[1, 3],
[2, 4],
[3, 5],
[4, 0]
],
'faces': [
{ 'adjacent_cell': 23, 'vertices': [0, 5]},
{ 'adjacent_cell': -2, 'vertices': [0, 1]},
{ 'adjacent_cell': 39, 'vertices': [2, 1]},
{ 'adjacent_cell': 25, 'vertices': [2, 3]},
{ 'adjacent_cell': 12, 'vertices': [4, 3]},
{ 'adjacent_cell': 9, 'vertices': [5, 4]}
],
'original': [8.168525781010283, 5.943711239620341],
'vertices': [
[10.0, 5.324580764844442],
[10.0, 6.442713105218478],
[9.088894888250326, 7.118847221681966],
[6.740750220282158, 6.444386346261051],
[6.675322891805883, 5.678806294642725],
[7.77400067532073, 5.02320427474993]
],
'volume': 5.102702932807149
}
```
*(note that the edges will now be indexed -1 to -4, and the 'volume' key is in fact the area.)*
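A quick sanity check you can run on any returned 2D cell (plain Python, no pyvoro needed): the 'volume' key should closely match the polygon area computed from 'vertices' with the shoelace formula. Using the sample cell above:

```python
def shoelace_area(vertices):
    """Area of a polygon whose vertices are given in traversal order."""
    area = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        area += x0 * y1 - x1 * y0
    return abs(area) / 2.0

# the 'vertices' of the sample 2D cell above
cell_vertices = [
    [10.0, 5.324580764844442],
    [10.0, 6.442713105218478],
    [9.088894888250326, 7.118847221681966],
    [6.740750220282158, 6.444386346261051],
    [6.675322891805883, 5.678806294642725],
    [7.77400067532073, 5.02320427474993],
]
area = shoelace_area(cell_vertices)   # should closely match 'volume' above
```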
NOTES:
* On compilation: if a Cython .pyx file is being compiled in C++ mode, all cython-visible code must be compiled "as c++" - this will not be compatible with any C functions declared `extern "C" { ... }`. In this library, the author just used C++ functions for everything, in order to be able to utilise the C++ `std::vector<t>` classes to represent the (ridiculously non-specific) geometry of a Voronoi cell.
* A checkout of voro++ itself is included in this project. Moving `setup.py` and the `pyvoro` folder into a newer checkout of the voro++ source may well also work, but if any of the definitions used are changed then it will fail to compile. By all means open a support issue if you need this library to work with a newer version of voro++; better still, fix it and send me a pull request :)
- Author: Joe Jordan
- Download URL:
- Keywords: geometry,mathematics,Voronoi
- Package Index Owner: joe-jordan
- DOAP record: pyvoro-1.3.2.xml
I'm not sure if the title is right, but
I'm migrating a system built on FuelPHP to a new server, but the following error has occurred and I can't get it to work properly.
Fatal error: Access to undeclared static property: Controller_Auth :: $this in ... (abbreviated)
The Controller_Auth class inherits from Basecontroller.
It seems that an error has occurred in the following description part in Basecontroller.
if (method_exists(static::$this, 'before_controller')) { static::before_controller(); }
I have investigated a lot, but is it wrong to use "::" to call a method that isn't declared static?
There was certainly no method that declared static in Controller_Auth.
I don't know how to rewrite this, so if you know how to rewrite it, please let me know.
- Answer # 1
static::$this is recognized by the PHP 5.3 series, and only partially in PHP 5.4 to 5.5.
An error occurs in PHP 5.4.22 and in PHP 5.5.6 or higher (including PHP 5.6, of course) with the change for bug #65911 (3v4l).
And it looks like method_exists's argument can simply be $this (3v4l).
Deferred … Continue reading Deferred Execution and Immediate Execution in LINQ
Var versus IEnumerable in LINQ
We have seen … Now the question is: where to use var or IEnumerable? Let us first have a look at both keywords. Var derives its type from the right hand side. Its scope is in the method. And it is strongly typed. IEnumerable is an interface which allows forward movement in the collection. Now … Continue reading Var versus IEnumerable in LINQ
Listing columns name with type of a table using LINQ
After my last post, I got a question asking how to list all the Column names of a given table using LINQ. Before going through this post, I recommend you to read Get All the Tables Name using LINQ Let us say, you have a table called dbo.Person and you need to find all the … Continue reading Listing columns name with type of a table using LINQ
Get All the Tables Name using LINQ
We need to list all the table name of the DataContext. This is very simple through LINQ mapping. 1. Add the namespace 2. Get the model from type of datacontext DataClasses1DataContext is your datacontext class created by LINQ to SQL. 3. Use GetTables() method to list all the tables name using System; using System.Collections.Generic; using … Continue reading Get All the Tables Name using LINQ
WCF Service library: Creating, Hosting and consuming WCF service with WCF Service Library project template
In this article we will walkthrough on creating a WCF Service by choosing WCF Service Library project template. Step 1 Create a WCF Service Library project. Delete all the default codes created by WCF. Step 2 Modify Operation Contract as below, Implement the service as below Step 3 Leave App.config file as it is with … Continue reading WCF Service library: Creating, Hosting and consuming WCF service with WCF Service Library project template | https://debugmode.net/2010/12/ | CC-MAIN-2017-30 | refinedweb | 316 | 61.26 |
class Animal1{
protected Animal1() {
}
void eat()throws IOException{System.out.println("Animal Eat....");}
}
}
public class Dog extends Animal1{
public void eat() throws FileNotFoundException{
super.eat(); //this line generates an error
System.out.println("Dog Eat.....");
}
}
Chiranjeevi Kanthraj wrote:dilan welcome to Ranch
if you want to over ride you can not throw the Subclass of IOException
so you can throw IOException in the sub class Dog for eat method
or if you are not overriding then you have to catch IOException, because Dog class eat method is thows only FileNotFoundException which is Sub class of IOException
Campbell Ritchie wrote:Welcome again
No, you are not correct. Because FileNotFoundException is a subclass of IOException, it is permissible to declare it there.
You haven't told us the details, for example: what sort of error are you suffering? | http://www.coderanch.com/t/552944/java/java/calling-super-method-subclass-overridn | CC-MAIN-2013-48 | refinedweb | 136 | 50.46 |
Cleaning High Ascii Values For Web Safeness In ColdFusion
On Tuesday, Ray Camden came to speak at the New York ColdFusion User Group about RSS feed parsing and creation. During the presentation, he talked about RSS breaking when ColdFusion hits a high ascii value that it doesn't recognize (and it just skips it or something, which will break the XML). This got me thinking about how to remove high ascii values from a string, or at least to clean them to be more web safe. I think we have all done this before, where we have to clean FORM submission data to make sure someone didn't copy and paste from Microsoft Words and get some crazy "smart" characters. In the past, to do this, I have done lots of Replace() calls on the FORM values.
But, Tuesday night, when Ray was speaking, I started, as I often do, to think about regular expressions. Regular expressions have brought so much joy and happiness into my life, I wondered if maybe they could help me with this problem as well. And so, I hopped over to the Java 2 Pattern class documentation to see what it could handle. Immediately, I was quite pleased to see that it had a way to match patterns based on the hexadecimal value of characters:
\xhh - The character with hexadecimal value 0xhh
Since we all know about the ASCII table and how to convert decimal values to hexadecimal values (or at least how to look it up), we can easily put together a regular expression pattern to match high ascii values. The only super, ultra safe characters are the first 128 characters (ascii values 0 - 127); these are HEX values 00 to 7F. Taking that information, we can now build a pattern that matches characters that are NOT in that ascii value range:
[^\x00-\x7F]
With that pattern, we are going to have access not only to the high ascii values we know exist (such as the Microsoft Smart Quotes), we are going to also have access to all the random high ascii values that people randomly enter with their data. This means that we are not going to let anything slip through the cracks.
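Since ColdFusion's regex support here sits directly on top of java.util.regex, the same pattern can be exercised in plain Java. Here is a minimal sketch of the idea (the class and method names are mine, not from any library):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HighAsciiFinder {

    // Matches any character outside the 7-bit ASCII range (0x00 - 0x7F).
    static final Pattern HIGH_ASCII = Pattern.compile("[^\\x00-\\x7F]");

    // Replace each high-ascii character with its numeric HTML entity,
    // e.g. the e-grave character becomes &#232;
    static String escapeHighAscii(String text) {
        Matcher m = HIGH_ASCII.matcher(text);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            int codePoint = m.group().charAt(0); // BMP characters only
            m.appendReplacement(out, "&#" + codePoint + ";");
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(escapeHighAscii("tr\u00e8s bien")); // tr&#232;s bien
    }
}
```

The appendReplacement/appendTail pair is exactly the mechanism the ColdFusion function below leans on.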
To encapsulate this functionality, I have create a ColdFusion user defined function, CleanHighAscii():
<cffunction
	name="CleanHighAscii"
	access="public"
	returntype="string"
	output="false"
	hint="Cleans extended ascii values to make them as web safe as possible.">

	<!--- Define arguments. --->
	<cfargument
		name="Text"
		type="string"
		required="true"
		hint="The string that we are going to be cleaning."
		/>

	<!--- Set up local scope. --->
	<cfset var LOCAL = {} />

	<!---
		When cleaning the string, there are going to be ascii values
		that we want to target, but there are also going to be high
		ascii values that we don't expect. Therefore, we have to create
		a pattern that simply matches all non low-ASCII characters.
		This will find all characters that are NOT in the first 127
		ascii values. To do this, we are using the 2-digit hex
		encoding of values.
	--->
	<cfset LOCAL.Pattern = CreateObject(
		"java",
		"java.util.regex.Pattern"
		).Compile(
			JavaCast( "string", "[^\x00-\x7F]" )
			) />

	<!---
		Create the pattern matcher for our target text. The matcher
		will be able to loop through all the high ascii values found
		in the target string.
	--->
	<cfset LOCAL.Matcher = LOCAL.Pattern.Matcher(
		JavaCast( "string", ARGUMENTS.Text )
		) />

	<!---
		As we clean the string, we are going to need to build a
		results string buffer into which the Matcher will be able
		to store the clean values.
	--->
	<cfset LOCAL.Buffer = CreateObject(
		"java",
		"java.lang.StringBuffer"
		).Init() />

	<!--- Keep looping over high ascii values. --->
	<cfloop condition="LOCAL.Matcher.Find()">

		<!--- Get the matched high ascii value. --->
		<cfset LOCAL.Value = LOCAL.Matcher.Group() />

		<!--- Get the ascii value of our character. --->
		<cfset LOCAL.AsciiValue = Asc( LOCAL.Value ) />

		<!---
			Now that we have the high ascii value, we need to figure
			out what to do with it. There are explicit tests we can
			perform for our replacements. However, if we don't have a
			match, we need a default strategy and that will be to just
			store it as an escaped value.
		--->

		<!--- Check for Microsoft double smart quotes. --->
		<cfif (
			(LOCAL.AsciiValue EQ 8220) OR
			(LOCAL.AsciiValue EQ 8221)
			)>

			<!--- Use standard quote. --->
			<cfset LOCAL.Value = """" />

		<!--- Check for Microsoft single smart quotes. --->
		<cfelseif (
			(LOCAL.AsciiValue EQ 8216) OR
			(LOCAL.AsciiValue EQ 8217)
			)>

			<!--- Use standard quote. --->
			<cfset LOCAL.Value = "'" />

		<!--- Check for Microsoft elipse. --->
		<cfelseif (LOCAL.AsciiValue EQ 8230)>

			<!--- Use several periods. --->
			<cfset LOCAL.Value = "..." />

		<cfelse>

			<!---
				We didn't get any explicit matches on our character,
				so just store the escaped value.
			--->
			<cfset LOCAL.Value = ("&##" & LOCAL.AsciiValue & ";") />

		</cfif>

		<!---
			Add the cleaned high ascii character into the results
			buffer. Since we know we will only be working with
			extended values, we know that we don't have to worry
			about escaping any special characters in our target
			string.
		--->
		<cfset LOCAL.Matcher.AppendReplacement(
			LOCAL.Buffer,
			JavaCast( "string", LOCAL.Value )
			) />

	</cfloop>

	<!---
		At this point there are no further high ascii values in the
		string. Add the rest of the target text to the results buffer.
	--->
	<cfset LOCAL.Matcher.AppendTail( LOCAL.Buffer ) />

	<!--- Return the resultant string. --->
	<cfreturn LOCAL.Buffer.ToString() />
</cffunction>
Here, we are checking for some very specific ascii values (all Microsoft characters), but if we cannot find an explicit match, we do our best to provide a web-safe character by returning the alternate escaped ascii value (&#ASCII;). Let's take a look in this in action:
<!---
	Set up text that has foreign characters. These foreign
	characters are in the "extended" ascii group.
--->
<cfsavecontent variable="strText">
	Bonjour. Vous êtes très mignon, et je voudrais vraiment
	votre prise en mains (ou, à s'emparer de vos fesses, même
	si je pense que peut-être trop en avant à ce moment).
</cfsavecontent>

<!--- Output the cleaned value. --->
#CleanHighAscii( strText )#
We are passing our French text to the function and then outputting the result. Here is what the resultant HTML looks like:

Bonjour. Vous &#234;tes tr&#232;s mignon, et je voudrais vraiment votre prise en mains (ou, &#224; s'emparer de vos fesses, m&#234;me si je pense que peut-&#234;tre trop en avant &#224; ce moment).
Notice that the high ascii values (just the extended ones in our case) were replaced with their safer escaped value counterparts.
As you find more special characters that you need to work with, you can, of course, update the CFIF / CFELSEIF statements in the function; but, until you do that, I think this provides a safer way to handle high ascii values on the web. At the very least, it's cool to see that regular expressions can make our lives better yet again.
Reader Comments
Very, very nice. Getting the range working was something I had issues with when I tried this last time - but seeing it now it looks so simple! :) I'm going to update toXML later today to include this code.
@Ray,
Glad you like it :)
I wrote these 2 functions for this similar problem. (not sure if they will paste correctly or not):
<samp>
<cffunction name="replaceNonAscii" returntype="string" output="false">
<cfargument name="argString" type="string" default="" />
<cfreturn REReplace(arguments.argString,"[^\0-\x80]","","all") />
</cffunction>
<cffunction name="replaceDiacriticMarks" returntype="string" output="false">
<cfargument name="argString" type="string" default="" />
<!--- Declare retString --->
<cfset var retString = arguments.argString />
<!--- Do Replaces --->
<cfset retString = REReplace(retString,"#chr(192)#|#chr(193)#|#chr(194)#|#chr(195)#|#chr(196)#|#chr(197)#|#chr(913)#|#chr(8704)#","A","all") />
<cfset retString = REReplace(retString,"#chr(198)#","AE","all") />
<cfset retString = REReplace(retString,"#chr(223)#|#chr(914)#|#chr(946)#","B","all") />
<cfset retString = REReplace(retString,"#chr(162)#|#chr(169)#|#chr(199)#|#chr(231)#|#chr(8834)#|#chr(8835)#|#chr(8836)#|#chr(8838)#|#chr(8839)#|#chr(962)#","C","all") />
<cfset retString = REReplace(retString,"#chr(208)#|#chr(272)#","D","all") />
<cfset retString = REReplace(retString,"#chr(200)#|#chr(201)#|#chr(202)#|#chr(203)#|#chr(8364)#|#chr(8707)#|#chr(8712)#|#chr(8713)#|#chr(8715)#|#chr(8721)#|#chr(917)#|#chr(926)#|#chr(931)#|#chr(949)#|#chr(958)#","E","all") />
<cfset retString = REReplace(retString,"#chr(294)#|#chr(919)#","H","all") />
<cfset retString = REReplace(retString,"#chr(204)#|#chr(205)#|#chr(206)#|#chr(207)#|#chr(8465)#|#chr(921)#","I","all") />
<cfset retString = REReplace(retString,"#chr(306)#","IJ","all") />
<cfset retString = REReplace(retString,"#chr(312)#|#chr(922)#|#chr(954)#","K","all") />
<cfset retString = REReplace(retString,"#chr(319)#|#chr(321)#|#chr(915)#","L","all") />
<cfset retString = REReplace(retString,"#chr(924)#","M","all") />
<cfset retString = REReplace(retString,"#chr(209)#|#chr(330)#|#chr(925)#","N","all") />
<cfset retString = REReplace(retString,"#chr(210)#|#chr(211)#|#chr(212)#|#chr(213)#|#chr(214)#|#chr(216)#|#chr(920)#|#chr(927)#|#chr(934)#","O","all") />
<cfset retString = REReplace(retString,"#chr(338)#","OE","all") />
<cfset retString = REReplace(retString,"#chr(174)#|#chr(8476)#","R","all") />
<cfset retString = REReplace(retString,"#chr(167)#|#chr(352)#","S","all") />
<cfset retString = REReplace(retString,"#chr(358)#|#chr(932)#","T","all") />
<cfset retString = REReplace(retString,"#chr(217)#|#chr(218)#|#chr(219)#|#chr(220)#","U","all") />
<cfset retString = REReplace(retString,"#chr(935)#|#chr(967)#","X","all") />
<cfset retString = REReplace(retString,"#chr(165)#|#chr(221)#|#chr(376)#|#chr(933)#|#chr(936)#|#chr(947)#|#chr(978)#","Y","all") />
<cfset retString = REReplace(retString,"#chr(918)#|#chr(950)#","Z","all") />
<cfset retString = REReplace(retString,"#chr(170)#|#chr(224)#|#chr(225)#|#chr(226)#|#chr(227)#|#chr(228)#|#chr(229)#|#chr(945)#","a","all") />
<cfset retString = REReplace(retString,"#chr(230)#","ae","all") />
<cfset retString = REReplace(retString,"#chr(273)#|#chr(8706)#|#chr(948)#","d","all") />
<cfset retString = REReplace(retString,"#chr(232)#|#chr(233)#|#chr(234)#|#chr(235)#","e","all") />
<cfset retString = REReplace(retString,"#chr(402)#|#chr(8747)#","f","all") />
<cfset retString = REReplace(retString,"#chr(295)#","h","all") />
<cfset retString = REReplace(retString,"#chr(236)#|#chr(237)#|#chr(238)#|#chr(239)#|#chr(305)#|#chr(953)#","i","all") />
<cfset retString = REReplace(retString,"#chr(307)#","j","all") />
<cfset retString = REReplace(retString,"#chr(320)#|#chr(322)#","l","all") />
<cfset retString = REReplace(retString,"#chr(241)#|#chr(329)#|#chr(331)#|#chr(951)#","n","all") />
<cfset retString = REReplace(retString,"#chr(240)#|#chr(242)#|#chr(243)#|#chr(244)#|#chr(245)#|#chr(246)#|#chr(248)#|#chr(959)#","o","all") />
<cfset retString = REReplace(retString,"#chr(339)#","oe","all") />
<cfset retString = REReplace(retString,"#chr(222)#|#chr(254)#|#chr(8472)#|#chr(929)#|#chr(961)#","p","all") />
<cfset retString = REReplace(retString,"#chr(353)#|#chr(383)#","s","all") />
<cfset retString = REReplace(retString,"#chr(359)#|#chr(964)#","t","all") />
<cfset retString = REReplace(retString,"#chr(181)#|#chr(249)#|#chr(250)#|#chr(251)#|#chr(252)#|#chr(956)#|#chr(965)#","u","all") />
<cfset retString = REReplace(retString,"#chr(957)#","v","all") />
<cfset retString = REReplace(retString,"#chr(969)#","w","all") />
<cfset retString = REReplace(retString,"#chr(215)#|#chr(8855)#","x","all") />
<cfset retString = REReplace(retString,"#chr(253)#|#chr(255)#","y","all") />
<!--- ' --->
<cfset retString = REReplace(retString,"#chr(180)#|#chr(8242)#|#chr(8216)#|#chr(8217)#","#chr(39)#","all") />
<!--- " --->
<cfset retString = REReplace(retString,"#chr(168)#|#chr(8220)#|#chr(8221)#|#chr(8222)#|#chr(8243)#","#chr(34)#","all") />
<cfreturn retString />
</cffunction>
</samp>
You can call one after the other. I usually call the Trim() function as well. The replaceDiacriticMarks() function replaces character with their "similar' standard ascii values, so è gets turned into e.
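For what it's worth, if the goal is just "è becomes e", Java's built-in Unicode normalization can stand in for most of an accent-replacement table. This is a sketch of the alternative technique, not something the functions above use (the class name is mine):

```java
import java.text.Normalizer;

public class DiacriticStripper {

    // Decompose accented characters (NFD splits e-grave into
    // 'e' plus a combining grave accent), then drop the combining
    // marks. Note: this only handles true diacritics; ligatures
    // like ae/oe still need explicit replacements as above.
    static String strip(String s) {
        String decomposed = Normalizer.normalize(s, Normalizer.Form.NFD);
        return decomposed.replaceAll("\\p{M}+", "");
    }

    public static void main(String[] args) {
        System.out.println(strip("\u00e8 \u00e9 \u00ea")); // e e e
    }
}
```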
@Jeff,
Looking pretty cool. I like that you get the "like looking" letters for foreign characters. I can really see where that would be good to have on hand.
@Ben,
I still think it's bad to be doing *that* many replaces, but for my needs it works (seems to perform well/quickly).
The replaceNonAscii() function just gets rid of the chars completely, which is probably worse than what you're doing...
I usually just call this function (which uses the 2 other functions I pasted in):
<cffunction name="safeForXml" returntype="string" output="false">
<cfargument name="argString" type="string" default="" />
<cfset var retString = arguments.argString />
<cfset retString = replaceDiacriticMarks(retString) />
<cfset retString = replaceNonAscii(retString) />
<cfset retString = Trim(retString) />
<cfreturn retString />
</cffunction>
@Jeff,
That sounds fair to me. That pretty cool that you already knew how to use the HEX values in the regex. I think that's way awesome. I love regular expressions.
I think the only real difference that mine has is that it has a default case that replace any high ascii values that were not accounted for. Other than that, I think we are headed in the same direction.
Rather than doing all of those cfsets and replaces, could you perhaps return an array of matched high ascii characters (just the numeric values), then loop over that list and replace those values with matching values from another array of replacements?
e.g.
<cfloop array="#matchingHighAscii#" index="value">
<cfset myString = Replace(myString, "&##" & value & ";", lowAscii[value], "ALL")>
</cfloop>
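A table-driven variant of Ben's function along the lines Gareth suggests might look like this in Java (a sketch; the class name and map contents are mine — the map mirrors the explicit cases in the function above, and everything else falls back to the numeric entity):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TableDrivenCleaner {

    // Map of code point -> web-safe replacement; extend as needed
    // instead of adding CFIF branches.
    static final Map<Integer, String> REPLACEMENTS = new HashMap<>();
    static {
        REPLACEMENTS.put(8220, "\"");  // left double smart quote
        REPLACEMENTS.put(8221, "\"");  // right double smart quote
        REPLACEMENTS.put(8216, "'");   // left single smart quote
        REPLACEMENTS.put(8217, "'");   // right single smart quote
        REPLACEMENTS.put(8230, "..."); // ellipsis
    }

    static String clean(String text) {
        Matcher m = Pattern.compile("[^\\x00-\\x7F]").matcher(text);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            int cp = m.group().charAt(0);
            // Fall back to the numeric entity when no mapping exists.
            String repl = REPLACEMENTS.getOrDefault(cp, "&#" + cp + ";");
            m.appendReplacement(out, Matcher.quoteReplacement(repl));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(clean("\u201chi\u201d\u2026")); // "hi"...
    }
}
```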
@Gareth,
That would definitely be much cleaner, and easier to read/add to. Thanks.
Wow, very timely entry, Ben! Yesterday I discovered a need to convert extended ASCII characters into their HTML encoded entities... and then I saw this post in my feed reader. Thanks. This is perfect.
@Richard,
Glad to be of help. Be sure to read the comments as well as some other very good ideas were presented.
Thanks guys for all your comments. I have been working at a solution of my own, inspired by some of the solutions here. I specifically wanted to convert all high characters with a value greater then 127 into XML safe hexadecimal characters. I came up with a small solution that doesn't rely on any external objects yet was based on a regular expression.
It can be found at my blog ..
@Jeff,
You saved me countless hours of heartache tonight. Thanks
DB "bad" bytes, you're snagging only the first byte out of a multi-byte character. It can lead to garbage characters in your output.
You're also left completely without the ability to handle many foreign languages, your code can only handle latin characters, and that's just too restrictive for many folks.
UTF-8 is the most common international multi-byte character encoding since it's incredibly comprehensive (UTF-16 is yet more so, but I believe it mostly only includes traditional Chinese characters [1 distinct character for each possible word] over UTF-8, and even most Chinese sites go with modern Chinese).
You'll discover that putting <cfprocessingdirective pageencoding="utf-8"> in the first 1024 bytes of each template will cause CF and all of CF's string functions to correctly handle "high-ascii" characters. You will also need to alert the browser (or other consumer of your data) what the character set is. There are various methods to do this, depending on what type of content you're sending. For most types of content, a simple way is to use <cfcontent type="(content-type such as text/html); charset=utf-8">. There are also HTML head entries which can specify the character encoding, and an XML processing instruction for this purpose too (<?xml version="1.0" encoding="utf-8" ?>) which would cover your RSS feeds.
Man, I worked long and hard on getting regular expressions to sanitize my data, and the outcome of that is that it's simply virtually impossible with regular expressions, you really need a string system which treats multi-byte characters as a single character, and the way to do that in ColdFusion is with the aforementioned processing directive.
Eric, it seems like you are saying that simply adding the cfcontent, and the charset in the <xml> tag, would solve _everything_. Is that true? If so - that doesn't seem to be what I'm seeing locally. I know BlogCFC does this (although it uses cfprocessingdirective instead of specifying it on cfcontent) and folks still have issues with the 'high ascii'.
You have to both do a <cfprocessingdirective> (this tells ColdFusion what character set it's working with), and also inform the browser (or other client) what character set you're sending them (since once you set the character encoding with <cfprocessingdirective>, it will send characters to the client in that encoding for any output initiated from that template). Both sides have to know what the encoding is, and have to agree.
Also remember that <cfprocessingdirective> has to appear at the top of every template which may be doing any work with multi-byte strings (whether with string-related functions in ColdFusion, or outputting multi-byte strings to the browser). By multi-byte strings, I mean anything dealing with characters over 0x7F (character 128 and beyond, right well past character 256). Notably for UTF-8, character 128 uses two bytes, but does not have to be represented with the HTML entity €.
For more information on how to inform various clients which character encoding you're sending them, see this W3C article:
I usually stick to pure UTF-8, it has never been insufficient for my needs. Ideally you'd want this to be some sort of application configuration preference, but in reality that may be unrealistic to your code, and as I mentioned earlier, UTF-8 will cover 99.9% of what anyone in the world would want to do; those who want to do something which requires an even larger character set typically already know how to do so.
So a little background.
The way that UTF-8 works is that the high-bit of the first byte in a given character indicates whether it is a member of a multi-byte character.
In binary, single characters may be made up of the following byte sequences:
0zzzzzzzz
110yyyyy 10zzzzzz
1110xxxx 10yyyyyy 10zzzzzz
11110www 10xxxxxx 10yyyyyy 10zzzzzz
where of course the Z's are the low-order bits, Y's the next highest order, X's the next, and W's the highest (well, the bit order can be changed, but I won't get into that for now, see for more information). Bytes whose two highest bits are 10 are members of a multi-byte string (this enables us to detect if some form of truncation has left us with only a partial character).
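You can watch these byte counts grow in Java, where String.getBytes() performs the UTF-8 encoding just described (a quick sketch, names mine):

```java
import java.nio.charset.StandardCharsets;

public class Utf8Bytes {

    // How many bytes does this string occupy once encoded as UTF-8?
    static int utf8Length(String s) {
        return s.getBytes(StandardCharsets.UTF_8).length;
    }

    public static void main(String[] args) {
        System.out.println(utf8Length("A"));            // 1 byte  (U+0041)
        System.out.println(utf8Length("\u00e8"));       // 2 bytes (U+00E8, e-grave)
        System.out.println(utf8Length("\u20ac"));       // 3 bytes (U+20AC, euro sign)
        System.out.println(utf8Length("\ud83d\ude00")); // 4 bytes (U+1F600)
    }
}
```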
UTF-8 string parsers examine a byte to look for the number of bytes to consume to act as a single character. Even though these characters are made up of multiple bytes, they are treated as a single character.
If you are dealing with UTF-8 strings, and you fail to tell ColdFusion that you are doing so with this processing instruction, then it will treat a multi-byte character as a single byte character, which is where string replacements and regular expressions and the like can start creating invalid data. In particular, HTMLEditFormat will break a multi-byte character if you don't have the correct processing instruction set. It will potentially encode individual bytes of a multi-byte character into separate entities (which then get treated as individual characters).
HTML as usual is incredibly forgiving about such things (though the characters may still look like garbage, at least it doesn't explicitly error). XML parsers tend to be incredibly unforgiving about such things, which could explain why you're seeing this when dealing with RSS.
If you correctly implement character encoding in this way, you'll find you don't have to perform black magic with trying to convert characters, ColdFusion (or more specifically Java's string implementation under the hood) will automagically handle all of this for you.
Interesting.
So I do NOT want to hijack this into a BlogCFC thing, but if someone on this thread uses 'high chars' a lot and would like to help me test, let me know. It sounds like all I need is a <cfcontent> tag.
Also, Eric, lets say you are building a feed and have no idea what the data is. You could have Chinese chars. You could have funky MS word chars.
Would you recommend this technique to cover all the bases?
It _sounds_ like we may have a perfect solution here, and if so, I need to quickly blog it and take credit.
When you are accepting character data from a remote source, you must know what the character encoding is (well, you have to know what it is for your own data too, but fortunately you get to control that). A valid remote source will specify what the encoding is either with a content-type, or via an embedded mechanism as defined by that type of data.
Note that I haven't had perfect success with <cfcontent> with specifying character encoding (in particular, this encoding is likely to get lost if the content is separated from the HTTP headers, such as occurs when you save a HTML file), I strongly suggest you also specify the character encoding with some meta data native to the format you're producing. EG, for XML, <?xml version="1.0" encoding="utf-8" ?>, for HTML, <meta http-equiv="Content-Type" content="text/html; charset=utf-8">.
Most of the time encoding will be detected automatically for you. By the time the string gets into your hands as a ColdFusion developer, this will have been resolved for you. For example, when the browser submits form data, it gets to specify what encoding it's submitting the data as. ColdFusion will decode this automatically, and effectively (though not really) it treats each character as if it was a 32 bit character. This is transparent to you, and characters are not typically represented in memory as byte arrays, but character arrays (that is to say, memory representation is agnostic to the encoding, and it is converted into an encoding upon output). In reality, that's not actually the case, but the standard specifies that it has to at least behave as if it is.
So the question is, I'm getting data from somewhere, and it might be in almost any encoding, how do I handle this? The answer is that if you're using standard libraries (like ColdFusion's XML parser), it's probably handled for you, and you can ignore it. If you're dealing with raw data (such as if you read a file off the disk which doesn't contain any character encoding hints), it may be more complicated than that.
In a circumstance where metadata is missing about what the character encoding is, it is not necessarily possible to reconstruct the original string correctly. There are really fabulous libraries in the C world which handle this. I'm thinking of iconv and the like. They can guess the character encoding by seeing if the bytes match a specific encoding, but the problem is that it is sometimes possible for a string to be valid in more than one encoding (encoding only talks about how do we represent in 8-bit units character data which cannot natively fit in those 8-bit units). iconv is really good, but has been known to make mistakes, which is why if you have some way to definitively determine the character encoding, you should use it instead (plus this is faster than the tests which are necessary to guess encoding). I don't know if there is a Java equivalent for iconv
Notably, it is an error for someone to provide you multi-byte data without telling you what encoding they are sending it to you in. The same data can be represented in a variety of encodings, and the actual bytes will be different, but the decoded value will be the same.
In the Java world, you can switch through encodings on the fly as you instantiate various objects which might need to output in various encodings, in ColdFusion, you're locked per-page (and you can't set it dynamically, this is a compile-time directive).
By the way, here is a short and simple example script showing that you can have high-value characters:
<cfprocessingdirective pageencoding="utf-8">
<cfcontent type='text/html; charset=utf-8'>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<style type="text/css">
.c {
display: block;
width: 60px;
float: left;
border: 1px solid #DDDDDD;
}
.c b {
margin-right: 5px; border-left: 1px dotted #DDDDDD;
}
</style>
</head>
<body>
<cfoutput>
<cfloop from="1" to="2048" index="x">
<div class="c">#x##chr(x)#</div>
</cfloop>
</cfoutput>
</body>
</html>
Replace all 3 instances of utf-8 with ascii, and see what difference it makes (CF will represent characters which don't fit in the ascii character set as a question mark ? ). Note that I'm not using HTML entities here, I'm just using straight up characters with character values greater than 255.
Also try iso-8859-1 and utf-16, save its output and open it with a hex editor to compare the different encodings for the same data; the bytes will be different, but it will show the same in the browser (as long as it's a character set which supports all the displayed characters). Notice that in UTF-8, when you exceed character 127, the character output starts taking up 2 bytes. (Note, you might not be able to trust your browser to save the byte stream the same way it received it, it might translate it to its preferred encoding before saving, you may have to use a command line tool like netcat or wget).
It may help to pay attention to the "encoding" portion of "character encoding," that is to say, it's like <cfwddx>. You can represent an object in text which is not natively text. Likewise, you can represent a string in 8-bit which uses more distinct characters than 8-bit natively supports.
So you use <cfwddx> to encode your complex data type into text. You use encoding to encode thousands of distinct characters using only 256 distinct bytes. It's encoded. You don't modify it in its encoded form, you modify it in its decoded form, and when you're done, you re-encode it so you can send it across a limited medium.
Once decoded, strings in memory do not remember that they were once UTF-8 or ISO-8859-1. You tell them to become this again when you're ready to transmit them. You tell ColdFusion how to output these strings in a way the browser (or other client) will like with <cfprocessingdirective pageencoding="utf-8"> (which I believe also controls its default handling for data streams with no native encoding specification).
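A small Java sketch of that encode/decode round trip — the in-memory string is encoding-agnostic, and only the byte representations differ (class and method names are mine):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class EncodingRoundTrip {

    // How many bytes does this string occupy in a given encoding?
    static int byteLen(String s, Charset cs) {
        return s.getBytes(cs).length;
    }

    // Decode bytes under a (possibly wrong) charset.
    static String decode(byte[] bytes, Charset cs) {
        return new String(bytes, cs);
    }

    public static void main(String[] args) {
        String s = "R\u00e9publique"; // 10 characters in memory
        System.out.println(byteLen(s, StandardCharsets.UTF_8));      // 11 (e-acute takes 2 bytes)
        System.out.println(byteLen(s, StandardCharsets.ISO_8859_1)); // 10 (e-acute takes 1 byte)
        // Decoding UTF-8 bytes as Latin-1 silently yields mojibake,
        // not an error:
        System.out.println(decode(s.getBytes(StandardCharsets.UTF_8),
                StandardCharsets.ISO_8859_1));
    }
}
```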
I hope I haven't strayed too far off-topic or posted too many really lengthy diatribes. My coworkers will see this, and know it's my writing even though it doesn't have my last name associated with it (they'd probably know it even if it had no name associated with it).
Well, I know this is old-hat by this point, but I'm finding that a lot of developers don't know much or anything about the difference between a byte and a character, or the difference between Unicode, and encoding that Unicode (such as with UTF-8).
I thought I'd include a link to a recent blog I did about this:
This post is just about what Unicode is, and what character encodings mean. I have a follow-up scheduled to this post for tomorrow which talks about UTF-8 specifically.
If you're a developer, especially a web developer, this is essential knowledge, and I'll be happy to try to answer any questions you might have.
@Eric,
I will check this out. I know that the concept of encoding and different character bytes definitely confuses me. I never learned it, and have never had to learn it too much. I will take a look at your post. Thanks.
@Eric
I'm trying to make a demo of your technique for my preso tomorrow but I'm having trouble. Your test script works well. But this sample does not. I don't see ? marks, but other odd marks instead of the proper foreign characters. What am I missing?
@Ray: I'll email you directly since you're up against a deadline and that's probably faster than going back and forth here. We can post a digest here once we get it settled. Just wanted to let you know to look in your email (gmail) in case you missed it.
I am trying to strip out a character which I presume to be unicode but not sure. If I take it out of notepad and paste it into my homesite, it shows as a blank space. How do I escape these types of characters? Here is what the character looks like - •
If your trying to use special/high characters it is best not to copy them from other sources as they will probably be encoded in their own character sets which are often incompatible with each other. This is why you return a blank space. For example by default ColdFusion encodes pages in Western (Latin 1) / ISO-8859-1 which is completely different to Unicode.
I suggest to use the character you want. Run the Windows application 'Character Map'. Select the font you are planning to use in your page such as Arial, and then change the 'character set' to 'Windows: Western'.
From there find the • character you are after and 'select' 'copy' it to your clipboard and then paste it into your page.
I don't disagree with you, however • is just one instance and there are other characters like that
Well you will still have to use the character map to find out the uni character code. For example
• is U+01BE
So in HTML4 you would escape it by using ƾ
® is U+00AE so that would be ® in HTML4
Oops that didn't display correctly
• is U+2022
So in HTML4 you would escape it by using AMP#x2022;
® is U+00AE so that would be AMP#xAE; in HTML4
(replace AMP with an &)
Ok, I guess my thing is, that I can escape that character, but there are other characters that come up from time to time so its not just that. Is there a regex statement that says only allow letters, numbers, characters that show up on a PC keyboard?
@Michael,
Yeah, you can do that. In the above example, I am using a regular expression that finds characters NOT in that range:
[^\x00-\x7F]
To get the characters that DO fall in that range, just remove the "not":
[\x00-\x7F]
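As a quick sketch of the stripping variant (delete rather than escape, assuming you can afford to drop the characters entirely — class name mine):

```java
public class AsciiOnly {

    // Keep only characters in \x00-\x7F by deleting everything else.
    static String asciiOnly(String s) {
        return s.replaceAll("[^\\x00-\\x7F]", "");
    }

    public static void main(String[] args) {
        System.out.println(asciiOnly("r\u00e9sum\u00e9")); // rsum
    }
}
```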
@Raymond Camden,
Can you please tell me if you have a solution for this.
<cfprocessingdirective pageencoding="utf-8">
<cfset myString = "The Islamic Republic of Mauritania's (République Islamique de Mauritanie) 2007 estimated population is 3,270,000. Also check Côte d'Ivoire">
<cfset myNewString = xmlFormat(myString)>
<cfoutput>#myNewString#</cfoutput>
@Raymond Camden,
I am sorry for that.. i am trying to ask if you have the solution for this.
@Ramakrishna
It looks like you'll have a new line before your processing instruction.
These would need to be on the same line:
<cfsavecontent><?xml
Also you need to be sure that you're saving the file in UTF-8 if you're telling <cfprocessingdirective> that the file is UTF-8 (check your editor preferences). Personally I use UTF-8 for absolutely everything. It covers the entire Unicode character set, and uses one byte per character for the majority of the characters typically found in output for most western languages.
Ben you rock! I have been banging my head on my desk for months and poof you fixed it for me!
THANKS!!!!!!!!!!!!!!
@Tiffo,
Awesome my man. Glad to help!
Extremely useful. Already spent a few hours trying to fix this myself. I especially like the french prose.
I am confused about a result I am getting after experimenting with Ben's code. I am trying to insert the following into a sql db and jam it in an xml doc:
S.à r.l.
If I use Ben's code and <cfoutput>#CleanHighAscii( strText )#</cfoutput>, then there is no issue. The output and viewing the page source are as expected and lovely.
However, after it is inserted into the database AND if you cfdump the above variable, rather than cfoutput, I get the following:
S.à r.l.
If you view the source on the above it is:
S.à
For some reason, after inserting into the db and during a cfdump, the & get replaced with &
Boy, I would appreciate any help or comments. Very frustrating.
Darn it, the examples didn't come out right above. You can see it here:
Although, for some reason the accent changed when I set up this test page. But the general idea is illustrated.
Robert, Ben's code replaces any character over U+007F (anything over the first 128 characters) with the { equivalent. Your à character is one such character, and encodes as à. CFDumping a string is essentially equivalent to outputting the HTMLEditFormat() for the same string. The character doesn't exist in the string any longer, only the HTML entity equivalent.
bah, with the { equivalent! Ben, get us a comment preview function ;-)
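Ben's actual CleanHighAscii() is a CFML UDF and isn't reproduced in this thread, but the behavior Eric describes — every character above U+007F replaced by its decimal character reference — can be sketched in a few lines. This is a hypothetical reimplementation for illustration, not Ben's code:

```python
def clean_high_ascii(text):
    # Keep 7-bit ASCII as-is; replace anything above U+007F with
    # its decimal character reference (&#NNN;).
    return "".join(c if ord(c) < 0x80 else "&#%d;" % ord(c) for c in text)

# The example string from the thread: "S.à r.l." (à is U+00E0, decimal 224)
print(clean_high_ascii("S.\u00e0 r.l."))  # -> S.&#224; r.l.
```

This also explains the cfdump behavior discussed above: once the character has been turned into an entity, any further HTML-escaping of the string escapes the ampersand of the entity itself.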
Ah, ok I get that. Thanks Eric.
Is there a way to preserve it so it inserts correctly into the sql table?
So that the data in the SQL is not the HTML entity encoded format? There is much to learn about character encodings to adequately debug where character encoding may be going wrong. The first thing you might consider checking though is that you have "String Format: Enable High ASCII characters and Unicode for data sources configured for non-Latin characters" enabled in your data source. You'll also need to be sure your SQL Server is configured to accept Unicode (eg, you store values as UTF-8 or UTF-16). Plus you'll want to be certain that you're storing the values in a column which can contain Unicode (such as nvarchar).
Weird things start to happen when any part of the stack doesn't support extended characters. When in doubt make everything UTF-8.
Also spend some time reading up on Unicode and UTF-8 (shameless plug):
If you spend any time working with International languages at all, this stuff is absolutely required knowledge. The alternative is banging on it till it works, not understanding why it works, and later discovering it only works part way.
Thanks Eric! Already, those first two tips were brand new to me.
I'll start to edu-ma-cate mines self. :)
Thanks again.
An observation for future generations.
I don't think my issue is related to anything I've talked about above. Rather, it seems XmlFormat() is escaping the string twice, so what should be an ampersand becomes ampersand-semicolon.
Anyone have a function for unescaping the string back into the high ascii values?
While I have used Ben's solution to resolve some issues with those mysterious Microsoft characters, I also tried some experimentation with UTF-8. As my clients use a WYSIWYG Editor to enter content, they will often copy straight from MS Word even with a special button to remove said characters.
As much as I try to use the UTF-8 suggestion from Eric, I can't get it to work. I was, however, able to get my websites to read correctly if I used iso-8859-1.
Being that UTF-8 is the preferred. What do I need to do to get it to work with my websites?
David, you might try out the "setEncoding" function in ColdFusion. The problem, as you've probably discovered, is that the browser is most likely submitting data on the form as ISO-8859-1, and when you regurgitate the same bytes but declare them to be UTF-8 encoding, you get some odd characters.
So something like setEncoding("FORM", "ISO-8859-1") then otherwise using UTF-8 for everything else should cause ColdFusion to have correctly interpreted the values in their native encoding.
@David,
I will defer to @Eric on this. Encoding as a concept is not something which I have fully wrapped my head around yet!
After reading this thread I have no idea what is going on with charset encoding or what to do about outputting CMS data. I don't see how you guys have time for this stuff...
@Pete,
Good stuff, glad to help.
@Rob,
I totally relate; character encoding is something that I only have a superficial understanding of. I only just recently started using UTF-8 on my databases! Then there's the page encoding and the HTML Meta encoding. Ug, it's still a bit murky for me..5 and CF8. Unicode filenames are served up by IIS7.x .
Anyone else run into this? Is this possible with CF8 (on IIS)?
@Dan,
I have not run into this problem my self. Hmm, not sure what to even suggest.
@Ben,
Your solution worked great for me. Thanks!
Paul
@Dan,
You say that your using utf-8 and unicode in your database. Are your database columns nvarchar instead of varchar? Check your meta tags, make sure your charset=utf-8. characters in place of the one Chinese one). This aspect is much less important than getting these characters into the url.
My conclusion to-date is CF8 can't do it. IIS7x can.
For others moving this way I'd add a couple of points to some of the steps detailed in the thread above:
- SQL statements need the N prefix on all string assignments.
- cf templates need the BOM to work; utf-8 alone is insufficient. Dreamweaver lets you set this.
- Read files (eg cffile) with the charset="utf-8". Writing files with unicode filenames doesn't work with cffile - the BOM is not set this way. You will need an alternative, eg java file output, where you can insert the BOM.
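The BOM point above is easy to get wrong when writing files programmatically. As a sketch (in Python rather than CFML or Java, purely for illustration), the "utf-8-sig" codec prepends the UTF-8 BOM (EF BB BF) automatically:

```python
import codecs
import os
import tempfile

# Write a file with a UTF-8 BOM; 'utf-8-sig' prepends EF BB BF for us,
# which is the same thing the comment above does via Java file output.
path = os.path.join(tempfile.gettempdir(), "bom_demo.txt")
with open(path, "w", encoding="utf-8-sig") as f:
    f.write("héllo")

with open(path, "rb") as f:
    raw = f.read()

print(raw[:3] == codecs.BOM_UTF8)  # -> True
```

Reading the file back with encoding="utf-8-sig" would strip the BOM again, so round-tripping stays clean.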
@Dan,
Sorry that whole thing adds such overhead to the whole process. database is going away and the customer group wants the information transitioned to a database that can be queried from the Web. In my case my plan is to create data files from the current database. Then parse the files to eliminate all non-ASCII (Microsoft) characters. The purified files would then be loaded into the database we use with our Web Site.
Thank you also for the UTF-8 discussion, I will ensure that I do not fall into the multi-byte trap discussed above.
@Jacques,
Sounds like you'll have some fun scripting ahead of you! I'm glad that the UTF stuff is helping you out. I feel like I am finally getting better at that; but, it's still a bit dodgy in my mind.
Hey Ben,
I used the information from your post to write a sanitization function for a C# web service yesterday, but I was still having problems.
It turns out the entire range [\x00-\x7F] isn't safe. For example, a vertical tab (\x0B) causes all 3 XML parsers I tried (ColdFusion, Firefox, Chrome) to choke.
I modified the regex to only include the printable characters (\x20-\x7E), plus carriage return (\x0D) and horizontal tab (\x09), for a new regex of:
[^\x20-\x7E\x0D\x09]
So far, this has worked flawlessly for me.
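Adam's range can be checked directly against an XML parser. The sketch below (Python for illustration) shows the vertical tab breaking a parse and the regex removing it; note that as written the range also strips line feeds (\x0A), which are legal in XML, so multi-line content would lose its newlines:

```python
import re
import xml.etree.ElementTree as ET

# Adam's range: keep printable ASCII plus carriage return and tab.
UNSAFE = re.compile(r"[^\x20-\x7E\x0D\x09]")

dirty = "ok\x0bvertical-tab"  # \x0B is not a legal XML 1.0 character
try:
    ET.fromstring("<r>%s</r>" % dirty)
except ET.ParseError:
    print("parser choked on \\x0B")  # this branch is taken

clean = UNSAFE.sub("", dirty)
print(ET.fromstring("<r>%s</r>" % clean).text)  # -> okvertical-tab
```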
@Adam,
Ah very nice! Silly vertical tab! Who uses those anyway :)
On an unrelated note, someone just pointed me to your Taffy framework post - looks like some very cool stuff. I'll be peeking at the implementation looking for some inspiration. using utf-8) and I thought I had this licked.
I'm using cf8 and oracle10g. Saving xml to nclob in an oracle table using cfqueryparam/sql_type_clob.
What's happening is that sometimes my unicode chars are getting converted to ascii and I'm getting those nasty invalid xml character errors. I figured out what's going on, but not why or how best to fix it.
If the total length of my xml is less than 4k, the bad ascii conversion happens. Anything over 4k and it saves to the database correctly.
I played around with some explicit settings (as described in the above discussion) but it didn't help. The encoding seems to be correct up until the sql update. I assume there is some internal conversion/typing going on with cfqueryparam that is causing this but I haven't been able to find anyone else describing this same issue.
Anyone got any suggestions? The quick and cheesy fix is me appending a fake element of 3k to the xml before saving. That works! But obviously not the ideal solution.
Thanks
@Ken,
That's strange that it would depend on the length of the inserted data. I don't really know much about CLOB data types, but it sounds like something is either going wrong with the data param'ing or with the insert. I'm stumped.
@Ken,
Buffer expansion during inserts?
@Gareth, @ben, and @Ray Thanks for all your ideas.
I had to read (and manipulate) an html file that had some special higher characters in it, like … and … and ? and - and " and ". (I think some of the html was cut and pasted from Microsoft Word.) I converted the html to cleaner xhtml using jtidy, then to xml so I could manipulate the data more easily. Using ColdFusion 8, xmlparse() did not like some of these higher characters (I forgot the error). Using the information from this post, I came up with the following code. By posting it here, hopefully I can save someone else some time.
Ok, post did not take my special characters...
"special higher characters", like:
ldquo
rdquo
hellip
ndash
mdash
laquo
raquo
@Dangle,
How nice is it that there's a way to represent high-ascii characters with the &#ASC; notation. That has saved me a few times. Glad that this got you through the XML-parse stuff. I have, however, run into experiences (I think) where the ampersand also needs to be escaped in these special character notations. Of course, I could very easily have been messing something else up. I can never remember all the XML-parsing rules.
I ran into that ampersand problem in converting html to xml also. However, as I read somewhere, the solution is to temporary change ampersands symbols to the allowed xml's "amperand a m p semicolon" before sending to xmlparse() using:
cfset beforeParseStr = Replace( htmlstr, "&", "&amp;", "all")
THEN after done with xmlparse() and all manipulations, put ampersands back.
cfset afterParseStr = Replace( ToString(XmlObj), "&amp;", "&", "all")
PS: Ben, Thanks for the very useful site -- I use it alot. :-) Dan
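Dan's escape-then-restore workaround isn't CFML-specific; here is the same mechanic sketched in Python (xmlparse()/Replace() become ET.fromstring()/str.replace(), but the steps are identical — this is an illustration, not Dan's actual code):

```python
import xml.etree.ElementTree as ET

html_str = 'caf&eacute; &amp; more'  # &eacute; is HTML, not an XML built-in

# Direct parsing fails on the undefined entity:
try:
    ET.fromstring("<r>%s</r>" % html_str)
    parsed_directly = True
except ET.ParseError:
    parsed_directly = False

# The trick: escape every & first, parse, then restore on the way out.
before_parse = html_str.replace("&", "&amp;")
tree = ET.fromstring("<r>%s</r>" % before_parse)
after_parse = ET.tostring(tree, encoding="unicode").replace("&amp;", "&")

print(parsed_directly)  # -> False
print(after_parse)      # -> <r>caf&eacute; &amp; more</r>
```

The named entities survive as literal text during parsing and manipulation, and the final replace puts the single ampersands back.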
The reason you're having difficulties with named entities like &rsquo; not being recognized when parsing as XML is that unlike HTML, XML predefines only five named entities (&lt;, &gt;, &amp;, &apos;, and &quot;). What you're doing is actually double-escaping those entities when you do &amp;rsquo;. This probably works because you're effectively telling the XML parser that there's a literal character sequence: "&rsquo;" instead of letting the XML parser handle the entity for you.
Like some other comments in this thread, this is also an encoding issue. &rsquo; is a named entity which maps to Unicode character 2019 (aka U+2019). Semantically, &rsquo;, &#x2019;, and the literal ’ character are identical to each other. In fact, if the XML document is set up to recognize the named entity, then in most XML parsers, once the document is parsed, it's impossible to know which entity form was used. It's also not possible to tell the difference between one of these and a properly encoded literal representation (such as UTF-8, if that's what you're using).
It is totally possible to use traditional named entities in an XML document. There are two ways to do so, one is to inherit a DTD which defines those entities. For example: PUBLIC "-//W3C//ENTITIES Latin 1//EN//HTML"
Another way is to identify and define those entities you wish to use, and include them explicitly. I won't post the entire HTML named entity list here, it's somewhat long. If anyone would like it, send me an email and I'll send it to you when I can (I'm at Adobe MAX this week - if you're there and you see a bald guy with a beard, come say hi:).
Here's a short example of what it looks like to include the definition of &rsquo; in your document (where RootNodeName is the name of your document element). You'll notice that rsquo is defined as *being* one of the alternate representations I mentioned above (&#x2019;).
<!DOCTYPE RootNodeName [
<!ENTITY rsquo "&#x2019;">
]>
The beauty of this approach is that you can declare your own named entities.
<!ENTITY eric "Eric, knower of character encoding">
In that XML document, if you typed: This guy sure thinks a lot of himself: "&eric;", it would parse as: This guy sure thinks a lot of himself: "Eric, knower of character encoding" There would in fact not be an easy way to return to the original shorter &eric; form.
We use XML and XSL to do page layout (XML is our model, and XSL is our view essentially), and this allows us to create short hand declarations for some complex entities. For example, on one site we use a small icon which represents a unit of measure. It comes with alt text, a little javascript for a tooltip, and so forth. Instead of having to type out the full markup for that each time (it's used a LOT), we just include &uom;, and it expands automatically. We also have a business requirement that all registered trademark entities are superscripted. So we declared &reg; to be <sup>&reg;</sup>.
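Eric's &eric; example can be checked with any conforming parser; here is a sketch using Python's ElementTree (the underlying expat parser expands general entities declared in the internal DTD subset):

```python
import xml.etree.ElementTree as ET

# A document element with a custom named entity declared in its
# internal DTD subset, exactly as described in the comment above.
doc = """<!DOCTYPE RootNodeName [
  <!ENTITY eric "Eric, knower of character encoding">
]>
<RootNodeName>This guy sure thinks a lot of himself: "&eric;"</RootNodeName>"""

root = ET.fromstring(doc)
print(root.text)
```

As Eric notes, once the document is parsed there is no easy way to recover the original shorter &eric; form — the parser hands back only the expanded text.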
Oops, said "email me" and didn't give my address. mightye~gmail.com
Also, &reg; isn't self-referential in our code like I said in my last paragraph, it's actually this:
<!ENTITY reg "<sup>&#174;</sup>">
Don't know what would happen if you created a referential loop in entities like that. Probably would depend on the XML parser, but you'd either get a stack overflow, or the parser would handle it more gracefully and warn you of the referential loop.
Thanks Ben for the article and thanks to Adam for enhancing the function.
I have just used it in my code to handle/ make safe xml text for my XML document.
Thanks again.
Philip
@Eric,
That is some awesome stuff. I had no idea that "&" was so special in XML. I just always thought of it as something that messed up my parsing :) Very very cool stuff and thanks for the explanation.
@Philip,
Awesome my man :)
Hi Ben,
Adobe introduced XmlFormat(string) to escape all special characters, including high-ASCII ones.
Is it safe to use the above function to make XML safe?
Thanks
Philip
@Philip,
There's a lot of really good discussion regarding the xmlFormat() method in the comments of this blog post:
Thanks Ben,
I will go through the post and comments.
Philip
Always, really allways when I'm looking for some CF stuff I'm redirected from Google to your site.
You're the man - keep em rockin!
@Charlie,
Thanks my man :)
Hey Ben,
Just thought you might want to know, it looks like this guy stole your article:
Thanks for the great info. I'm working on a ColdFusion Builder extension and testing with the high ASCII values 8220 and 8221 (Microsoft "smart" quotes).
CF Builder seems to be botching those characters (they appear as question marks) before my handler page can do anything about them.
Dumping the XML variable (ideventinfo) in my handler (before any processing by the handler) gives me the question marks in place of the smart quotes. I don't see any way to set the encoding to remedy this, though.
Thanks Ben! This function solved my "search in database with special characters" problem.
@Andy,
Hmm, not sure what that other site is. I'm not gonna worry about it for now. As for encoding, there's a few places you can set encoding - with HTML Meta tags and with the CFProcessingDirective tag; perhaps throwing UTF-8 into one or both of those places will help with the question marks.
@Wouter,
Glad to help.
How do I take care of these pesky U+FFFF non-characters plaguing my XML? Your function apparently didn't cut it on those.
Correction: It does appear to have dealt with them.
Will this function work in CF7?
@Eric,
Many months later, I finally look into the XML ENTITY tag for the first time:
I only have the lightest-touch of understandings at this point; but, I am definitely reminded that XML is way more robust and powerful than I think I ever really give it credit for.
Thanks! This even solved my PHP problem by using your given ASCII range in preg_replace(). I tried other ways of removing non-standard ASCII from input, but this one worked for me.
@vector, for PHP, I recommend looking into either iconv() or mb_convert_encoding(). For example:
Most browser-submitted content is going to originally be in ISO-8859-1, so the majority of the time using this will also get you correct results:
I recommend using UTF-8 for pretty much everything, from the data at rest in your database or filesystem, to the character encoding of your page output.
You can also use filter_vars if you just want to drop extended characters outright: "invalid JSON format" error if any of the data contains special characters.
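Eric's iconv()/mb_convert_encoding() suggestion, sketched in Python for illustration (the PHP originals behave analogously — decode from the encoding the bytes are actually in, then re-encode as UTF-8):

```python
# Bytes as a browser might submit them in ISO-8859-1 (Latin-1): 0xE9 is "é"
latin1_bytes = b"caf\xe9"

# Decode using the real source encoding, then re-encode as UTF-8 --
# the equivalent of iconv("ISO-8859-1", "UTF-8", $s) in PHP.
text = latin1_bytes.decode("iso-8859-1")
utf8_bytes = text.encode("utf-8")

print(text)        # -> café
print(utf8_bytes)  # -> b'caf\xc3\xa9'
```

The failure mode described earlier in the thread is exactly the reverse of this: relabeling Latin-1 bytes as UTF-8 without converting them.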
It's probably considered bad practice by some, but we've globally sanitized data in Application.cfc's onRequestStart() method. We update the values of URL and FORM directly so that these values are sanitized for anything downstream which might want them.
We have the policy that anything that goes into the database must be UTF-8, if non-UTF-8 data gets in there, that's where the bug lies (rather than in the code which outputs non-UTF-8 data). It's just much harder to always sanitize outputs than to always sanitize inputs.
FWIW, the reason you get "invalid JSON format" is because the JSON spec specifies that data must be encoded as UTF-8. No other encoding is acceptable. For the most part, browsers in the US and Europe are submitting data in ISO-8859-1 (LATIN-1), so if you're using JSON, you need to be careful to be sure user input encoding is sanitized.
When I added the following code to onRequestStart in app.cfc, it stripped out the special characters:
This is using the aforementioned regex that Adam Tuttle came up with. Is this what you had in mind Eric? Or is there a better way to detect if something is not UTF-8 compliant? changed.
What's interesting is that we haven't done any of these things but with CF8 we seem to be handling international characters just fine?
Just on my own CF9 setup, to get ascii, unicode and html brackets sorted for XML, I took the following from the above:
I do not understand why unicode characters come back from Ben's function and the lone apostrophe gets sorted by xmlFormat function.
I added in Sanskrit, Chinese, and Arabic, and Ben's cleanHighAscii() changes them all to unicode.
Look at the source code.
Hi Ben,
I am using the CleanHighAscii function in my application. It's working great for the special-characters issue, but it's resulting in a cross-site scripting problem. I tried HTMLEditFormat() to overcome the XSS problem while displaying, but it displays the hex code in the front end. Please help me out on this issue.
Thanks & Regards,
Naresh.
On Thu, 03 Jan 2008 00:50:56 +0100 Michael Albinus <address@hidden> wrote:

MA> Ted Zlatanov <address@hidden> writes:
MA> I'm a little bit lost. Does it mean you don't want to offer *all*
MA> messages in a mailbox?
>>
>> Correct, otherwise it's hard to handle invalid messages: are they
>> invalid files or something else? I wanted also to treat duplicates as
>> automatic backups but if you don't like that idea then I'll drop it.

MA> Maybe we should see real life examples. I don't know whether it is
MA> always good to present selected contents only. If there are technical
MA> restrictions - that's another game.

(I'm leaving Ding on the CC in case anyone has comments).

Second question: is message 4 ignored? I would prefer to do so, to allow coexistence of tramp-imap.el with other messages (or even later versions of tramp-imap.el). I think it's fine to use message UIDs as the true file name, but we need to decide what to do in the cases above.

Third question: namespaces. I feel that it's much better for the user to store all the files in a single mailbox:

  INBOX.storage holds /a/b/c, /d/e/f, and /g/h/i

I believe you proposed that instead we should auto-create:

  INBOX.storage.a.b to hold /a/b/c
  INBOX.storage.d.e to hold /d/e/f
  INBOX.storage.g.h to hold /g/h/i

Did I understand you correctly? Maybe we could do both, allowing for a "root mailbox" and a "root prefix". In the first case, those would be

  INBOX.storage      /

and in the second case, they would be

  INBOX.storage.a.b  /a/b
  INBOX.storage.d.e  /d/e
  INBOX.storage.g.h  /g/h

And maybe the user can configure those mappings exactly. I still think it's too complicated for 99% of the users and they'll never need more than one mailbox per virtual filesystem, but if you disagree I can do the extra work.

I'll assume we've picked the single mailbox approach for the rest of the message. The implementations will change quite a bit as far as directories are concerned if we use multiple mailboxes.
MA> As starting point you might look at tramp-smb.el or tramp-gw.el. Both
MA> are addons, like tramp-imap.el could be. tramp-fish.el might be examined
MA> as well, but this method isn't used anywhere I believe - it was merely a
MA> proof of concept I didn't want to throw away. And I never ever got a bug
MA> report about ...

OK, I'll look. tramp-gw.el doesn't have any Emacs primitives implemented at first glance, so I'll look at tramp-smb.el which defines all those mappings nicely in tramp-smb-file-name-handler-alist. That list has quite a few methods; are they all necessary or do the default handlers for some of them suffice?

add-name-to-file: could be a special "link message" or just a copy, like in Windows
copy-file: implemented as an APPEND
delete-directory: implemented with a search+delete for all matching messages
delete-file: search+delete of all matching messages
directory-file-name: tramp-handle-directory-file-name?
directory-files: search for matches
directory-files-and-attributes: search for matches, attributes always 777
dired-call-process: ignore
dired-compress-file: ignore
file-accessible-directory-p: always t
file-attributes: always 777
file-directory-p: needs a search, but we could have a file name that conflicts with a directory name
file-executable-p: always nil
file-exists-p: needs a search
file-local-copy: ?
file-remote-p: tramp-handle-file-remote-p
file-modes: tramp-handle-file-modes or hard-code
file-readable-p: always t
file-regular-p: always t
file-symlink-p: always nil
file-truename: returns UID
file-writable-p: always t
find-backup-file-name: we need to decide
insert-directory: ?
insert-file-contents: search+retrieve
shell-command: ignore?
substitute-in-file-name: ?
unhandled-file-name-directory: tramp-handle-unhandled-file-name-directory?
vc-registered: always nil
verify-visited-file-modtime: ?
write-region: needs to do an append+delete of original+backups as needed; IMAP can't rewrite a message

Thanks,
Ted
IRC log of xhtml on 2008-02-19
Timestamps are in UTC.
08:09:44 [RRSAgent]
RRSAgent has joined #xhtml
08:09:45 [RRSAgent]
logging to
08:10:01 [Steven]
rrsagent, make log public
08:10:22 [Steven]
Meeting: XHTML2 WG FtF, Venice, Italy, Day 2
08:10:35 [Steven]
Chair: Roland, STeven
08:10:39 [Steven]
Scribe: Steven
08:11:45 [Steven]
Agenda:
08:12:42 [Steven]
.me 12.30ish
08:19:03 [Steven]
yep
08:27:11 [yamx]
yamx has joined #xhtml
08:27:34 [alessio]
alessio has joined #xhtml
08:30:10 [Steven]
rrsagent, make minutes
08:30:10 [RRSAgent]
I have made the request to generate
Steven
08:30:55 [Steven]
s/ST/St/
08:31:23 [Steven]
Present: Roland, Alessio, Rich, Yam, Steven, Shane
08:32:18 [oedipus]
present+ Gregory
08:33:27 [Steven]
I was just about to thank you
08:33:36 [oedipus]
no need
08:41:22 [Steven]
Topic: Frames (brief return)
08:41:35 [Steven]
Scribe: Yam
08:41:43 [Steven]
scribenick: yamx
08:42:17 [yamx]
Steven: Brat will review M12N modularization transition request.
08:42:57 [yamx]
Steven: Brat asked about XHTMLBasic 1.1 (but it is not delayed from M12N from our perspective).
08:43:14 [yamx]
s/from M12N/by M12N/
08:44:09 [Steven]
s/Frames/Role/
08:44:28 [OedipusWrecked]
OedipusWrecked has joined #xhtml
08:44:32 [yamx]
Shane: an issue for Role.
08:45:04 [yamx]
Shane: I would like to capture the resolution correctly.
08:46:07 [yamx]
Rolan: It SHOULD have dereferenceable, and one of them should be RDF.
08:46:14 [yamx]
s/Rolan/Roland/
08:46:51 [yamx]
Steven: we added "some is beyond the scope of role (it is vocab issue)".
08:46:57 [yamx]
Roland: Another vocabulary.
08:47:25 [yamx]
Roland: yesterday of action is action for Roland to reply to the guy raising the original issue.
08:47:41 [yamx]
Roland: Leave the action for Roland.
08:48:11 [yamx]
Roland: we talked xFrames, equivalent iFrame.
08:48:43 [yamx]
Topic: Yesterday refresh (xFrames)
08:50:19 [Steven]
<frame xml:
08:50:22 [yamx]
Steven: Populating URI with ids, and extends to XHTML2.
08:50:30 [Steven]
home.xframes#frames(one=a.xhtml,
08:51:03 [Steven]
So we could say that this works for XHTML2 as well
08:51:06 [yamx]
Roland: We will retain the notion of subclases of object like iFrame, src value matching.
08:51:18 [Steven]
and the URI replaces the src attribute
08:51:37 [Steven]
on the element that matches the id
08:51:59 [yamx]
Roland: Could we do the same thing to document?
08:52:41 [yamx]
Steven: question is matching with id, not match src attribute, do we create one?
08:53:22 [yamx]
Steven: Replacing can lead to some unintended security risk.. just thinking...
08:54:45 [yamx]
Roland: thinking about killing three birds in one stone, bookmarking, frame flow, ...
08:55:01 [yamx]
Steven: going for some flipchart to think about it.
08:55:57 [yamx]
(Steven went out for flipchart)
08:56:18 [yamx]
s/going/I am going/
08:56:37 [yamx]
s/for/, seeking for/
08:57:07 [yamx]
(Steven returned with a flipchart.)
08:57:46 [yamx]
Steven: [[summarizing frame requirements]]
08:58:57 [yamx]
Roland: maybe some attributes to distinguish "replace-able", "bookmark-able" srcs.
08:59:51 [yamx]
Shane: it may resolve many comments we received about people's concerns what content can be covered.
09:00:33 [yamx]
Roland: "static" and "iframe-type-src (replace-able)".
09:02:09 [yamx]
Roland: Ability to change (bookmark, if changed, it is reflected as URL).
09:02:46 [yamx]
Rich: changing the value will automatically refresh?
09:03:07 [yamx]
Rich: We have to think about scripting environment.
09:03:29 [yamx]
Roland: Does the page automatically reloaded when managed by scripts?
09:03:37 [yamx]
s/Roland/Rich/
09:04:12 [yamx]
Roland: Default action is reload, but it can be cancellable.
09:04:41 [yamx]
Roland described body-object-iframe on flipchart.
09:05:03 [yamx]
<body>
09:05:15 [yamx]
<object src="myStuff.smil" />
09:05:30 [yamx]
<iframe id="f2" srcc="myStuff.html" />
09:05:32 [yamx]
</body>
09:05:46 [yamx]
s/srcc=/src=/
09:06:23 [yamx](f2=myothersutff.html,...
)
09:07:02 [yamx]
Roland: bookmark will go to "myStuff.html".
09:07:35 [yamx]
Steven:
"src(f2=myotherstuff.html, ..); Roland agreed.
09:07:57 [yamx]
s/"src/#src/
09:08:56 [yamx]
Steven: the resulting URL is not changed from <object> use.
09:09:27 [yamx]
Roland: the first one can be <div> (not only <object>).
09:10:31 [yamx]
Roland: "target" as "iframe"...
09:11:19 [yamx]
Steven: Frame does not have "id", it is not bookmarkable.
09:12:03 [yamx]
Roland: bookmarkability, inline capability, fall-back capability, flow, ...
09:12:29 [yamx]
Rich: some next step issues, widgets, ...
09:13:20 [yamx]
(Pens arrived for scribing flipcharts.)
09:13:40 [ShaneM]
(Prior to this scribing in blood - dedicated group!)
09:13:58 [yamx]
(We are blood-tied-group)
09:15:36 [Roland_]
Roland_ has joined #xhtml
09:15:39 [Roland_](f2=myotherstuff.html,f1=img.jpg...
)
09:15:39 [Roland_]
<body>
09:15:39 [Roland_]
<div id="f1" role="navigation" src="nav.html" />
09:15:41 [Roland_]
<iframe id="f2" role="main" src="myStuff.html" />
09:15:43 [Roland_]
</body>
09:16:11 [Steven]
Rules: If there is no @src, then there is no assignment
09:16:35 [Steven]
... changes in an iframe get reflected in the top-level URL
09:18:38 [yamx]
Roland: We will think if we come down anything more this afternoon.
09:19:07 [Steven]
Shane: We need to run this past Mark who has a similar mechanism in his implementation
09:19:54 [yamx]
Roland: Id, embed content, iframe content itself id duplicate, any qualifying identification. Things arise..
09:25:45 [yamx]
Steven: talking with Shane to clarify document-load-event (image loading is not complete).
09:26:38 [Steven]
The question is: is the *document* ready after or before all the <div src=..>s are ready
09:26:51 [Steven]
DOes it act like img does now, or not
09:27:13 [Steven]
s/DO/Do/
09:27:25 [oedipus]
shouldn't it act as OBJECT was intended?
09:27:26 [yamx]
Shane: Scripting should be executed inline?
09:27:52 [yamx]
Steven: Declarative.
09:28:16 [yamx]
Steven: Probably executed, but not during parsing.
09:28:38 [Steven]
s/executed/executed in document order/
09:29:02 [Steven]
SHane think that document ready occurs *after* all the sub documents are ready
09:29:18 [Steven]
.... they are very different from images, which do not have presence in the DOM
09:29:25 [Steven]
s/SH/Sh/
09:30:49 [Steven]
Alessio: I just tried some iframes in current borwsers and the dub documents in an iframe get executed script and all as you would expect
09:30:59 [Steven]
s/bor/bro/
09:31:09 [Steven]
s/dub/sub/
09:31:43 [ShaneM]
When they are executed are the executed in the context of the parent document? Or in their own context? From a javascript security perspective I mean?
09:33:18 [Steven]
good question
09:33:25 [yamx]
Roland: IT is something we have to sort out.
09:33:33 [yamx]
s/IT/It/
09:33:53 [alessio]
I'm trying to do it in the parent document now, shane
09:34:17 [alessio]
but the security issues remain, naturally
09:35:01 [yamx]
(coffee break, a point of success.)
09:35:49 [yamx]
Roland: next agenda after coffee break will be "longdesc".
09:36:23 [Steven]
we are off to the cafe for a coffee Gregory
09:45:01 [Lachy]
Lachy has joined #xhtml
09:56:51 [oedipus]
thanks steve
10:00:31 [oedipus]
Equivalent Content for Images (HTML wiki page by GJR):
10:06:57 [oedipus]
there is a lot of material related to short and long descriptors (PF asked HTML WG for retention of longdesc or a superior mechanism) in the Equivalent Content for Images (HTML wiki page by GJR):
10:07:00 [Steven]
Did you write all that page Gregory?
10:07:22 [oedipus]
i wrote most of it, and then added comments and proposals by others, but, yeah, that's my baby
10:08:04 [Steven]
Do you list Google (or search engines) as a consumer of alternate content?
10:08:56 [oedipus]
one thing that is missing is use of a tree (expand/collapse all) or accordion (expand/collapse individual branches) widget as a long descriptor providing the user with the ability to follow all or selected branches -- gives the equivalent "eureka!" moment that sighted users get from schematics, flow charts, tree grids, etc.
10:09:05 [Roland_]
Roland_ has joined #xhtml
10:09:23 [Rich]
Rich has joined #xhtml
10:09:39 [oedipus]
steven, yes, there is discussion of search engines and flickr-type sites on the page (it is quite a long page)
10:10:19 [Steven]
I personally believe that the only solution is to have the longdesc content in the page, near the equivalent object, otherwise authors will never provide it, nor update it
10:11:32 [oedipus]
that should be up to the user -- display inline, display as block/iframe/object, display in parallel -- there are a lot of use cases for side-by-side description/image couplets as there are for a linked descriptor (which could always be yanked into a document instance)
10:12:07 [yamx]
yamx has joined #xhtml
10:12:13 [oedipus]
long ugly wiki uri on exposition options:
10:12:14 [oedipus]
10:12:22 [Steven]
The reason that longdesc has failed to date is that it is too difficult to do, keeping parallel documents
10:12:57 [oedipus]
that and a lack of support and freaky implementation -- JAWS opens longdesc in a new pop-up window, which pop-up blockers block!
10:13:14 [Steven]
The HTML5 groups has the disadvantage of having to be backwards compatible
10:13:39 [Steven]
We can adopt better solutions, for instance by allowing img to have content
10:13:45 [Steven]
s/groups/group/
10:13:56 [oedipus]
true, but the underlying issues remain -- that's what the esw wiki page is an attempt to do -- outline the needs/use cases and come up with a better mousetrap
10:14:31 [oedipus]
IMG as a container would be a consummation devoutly to be wished
10:14:49 [oedipus]
OBJECT is my preferred solution, only it is broken
10:15:17 [Steven]
Actually in XHTML2 all elements behave like object.
10:15:30 [oedipus]
that's why i joined the WG!!!
10:15:31 [Steven]
<div src="whatever">fallback</div>
10:15:45 [Steven]
good on yer Gregory!
10:15:51 [oedipus]
thanks
10:16:27 [oedipus]
the name of the wiki page is a relic of issues past -- it should be "Long and Short Descriptors for Static Images" or some such
10:18:38 [oedipus]
to quote myself from the page:
10:18:40 [oedipus]
10:18:40 [oedipus]
* expose in new browser instance
10:18:40 [oedipus]
* expose in new browser tab
10:18:41 [oedipus]
* expose inline (insert content as object)
10:18:43 [oedipus]
* expose inline through the use of IFrame
10:18:45 [oedipus]
* expose the contents of the longdesc document in a side-bar,
10:18:47 [oedipus]
aligned with the image it describes
10:18:49 [oedipus]
and there are many other options, provided a user knows what to do when encountering a long description, then it matters not what assistive technology she is using, for there is an expected action in the case of browser x for exposing LONGDESC
10:20:01 [oedipus]
GJR notes that some of the best examples of LONGDESC are in CSS 2.0 -- 46 longdescs in all!
10:20:18 [yamx]
Shane: "longdesc" attribute is available even when src does not fail.
10:20:54 [oedipus]
right: it isn't necessarily an either/or proposition -- some user groups need guidance through an image
10:20:55 [yamx]
Steven: "longdesc" is the content of element, we don't need "longdesc" attr.
10:22:00 [Steven]
10:22:06 [oedipus]
GJR: point of reference -- are we discussing M12N section 5.7 "Image Module"
10:22:13 [yamx]
Shane: once resource is obtained, the rest of content is not processed.
10:22:23 [yamx]
Steven: a question: not processed or not presented?.
10:22:35 [ShaneM]
we say: If accessing the remote resource fails, for whatever reason (network unavailable, no resource available at the URI given, inability of the user agent to process the type of resource) or an associated ismap attribute fails, the content of the element must be processed instead.
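Shane's processing rule can be illustrated with a short sketch (element and file names are illustrative, not from the spec text):

```html
<!-- If fetching logo.png fails (network down, 404, unsupported type),
     the element's content is processed instead, per the rule quoted
     above. Illustrative example only. -->
<p src="logo.png" srctype="image/png">
  The ACME logo: a red gear above the letters "ACME".
</p>
```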
10:23:44 [ShaneM]
10:24:15 [yamx]
Shane: I did not look into public version.
10:25:52 [yamx]
Shane: Issue persists. What happens content nested after a document is resolved?
10:26:00 [Steven]
I think that the alternate text is in the DOM
10:26:11 [Steven]
... I think we can define the behaviour with CSS
10:26:26 [Steven]
.... in fact I'm sure that Jonny Axellsson once demonstrated that
10:26:33 [oedipus]
it should be in the DOM so it can be reused/accessed by assistive technology/regular users
10:26:33 [Steven]
s/ll/l/
10:27:12 [Steven]
ACTION: Steven to add text to embedding module saying how it works
10:29:27 [Steven]
Rich says we need some events that say what happens
10:29:35 [Steven]
... load when a src works I think
10:29:47 [Steven]
s/load/'load'/
10:30:37 [Steven]
Steven: What should you get if the src fails though?
10:31:11 [oedipus]
GJR: need to ensure that if user needs side-by-side image and descriptor to process the image (limited viewport, cognitive issues, etc.) it is readily and easily available
10:32:01 [yamx]
Steven: reasons for src failing: (1) network down, (2) 404 or similar.
10:34:11 [yamx]
Steven: reviewed error code 406 for HTTP.
10:35:05 [yamx]
Roland: you did not get any event in image load finished.
10:36:15 [yamx]
Roland: DOM3 have anything different.
10:37:33 [yamx]
s/have/don't have/
10:38:03 [yamx]
Steven: DOM went to WebAPI, Rich.
10:39:01 [yamx]
Steven: DOM3 said images are loaded before you get load event.
10:39:10 [Steven]
"The DOM Implementation finishes loading the resource (such as the document) and any dependent resources (such as images, style sheets, or scripts). "
10:39:18 [Steven]
10:40:17 [Steven]
Event "error": A resource failed to load, or has been loaded but cannot be interpreted according to its semantics such as an invalid image, a script execution error, or non-well-formed XML.
10:41:43 [yamx]
Roland: Let's see new charter for WebAPI to check DOM3s...
10:42:05 [Steven]
10:42:09 [Steven]
minutes of a recent call
10:42:47 [Steven]
Zakim: leaving. As of this point the attendees were Carmelo, Andrew_Emmons, anne, shepazu
10:45:36 [yamx]
Shane: DOM3 Events use qnames?
10:46:53 [yamx]
Roland: no search result for qname.
10:47:01 [yamx]
Steven: BUt they have namespaced events.
10:47:10 [Steven]
s/BU/Bu/
10:48:49 [yamx]
Roland and Steven: We don't need "longdesc" attr. Stenven will add clarifying text for embedded context. (already ACTION for Steven)
10:49:07 [yamx]
s/Stenven/Steven/
10:49:52 [yamx]
Roland: alternative to src.
10:50:13 [yamx]
Steven: Added (3) images switched off, at flipchart.
10:51:13 [yamx]
.. after (2) 404/406 or similar.
10:54:52 [Steven]
In CSS, you say img[src] {content: attr(src)}
10:55:29 [Steven]
so in principle you can switch between displaying the image and the content
10:55:51 [oedipus]
GJR: that "principle" needs to be explicitly stated
10:55:52 [Steven]
(but the CSS doesn't give you control over @srctype)
10:56:28 [Steven]
In any case the alternate content is in the dom, so is available for use as necessary
10:56:45 [oedipus]
amen
10:57:26 [yamx]
Steven: popping stack for a few levels, back to "longdesc".
10:57:56 [yamx]
Roland: how we can style both?
10:58:58 [yamx]
Steven: in CSS, content: means replacement with external resource. (Stven went to flipchart to argue)
10:59:10 [yamx]
s/Stven/Steven/
11:02:34 [yamx]
Roland: CSS cannot do fallback at failure.
11:03:06 [yamx]
Steven: *[src]:error, you can invent pseudo-class in that case.
11:03:40 [yamx]
Steven: We don't know it is allowed, but let's assume it allowed...
11:04:13 [yamx]
Steven: *[src]:before
11:04:27 [yamx]
Steven: {content: attr(src)}
11:04:41 [yamx]
Steven: *[src]{display:none}
11:04:55 [yamx]
Steven: body.nosrc *[src] {display:blank}
11:05:07 [yamx]
Steven: body.nosrc *[src]:before
11:05:14 [yamx]
Steven: {content: ""}
11:05:26 [yamx]
Steven: *[src]:error
11:06:18 [oedipus]
steven, will that {content: "foo";} make it into the DOM? currently, CSS-generated text isn't in the DOM and isn't accessible to assistive tech
11:06:19 [yamx]
Steven described an error case with src="foo.xdiv" srctype="video/xdiv"
11:09:19 [oedipus]
UAAG (user agent accessibility guidelines WG) is trying to address CSS- and script-generated text, UAAG2 has a proposed requirement that ALL text, no matter what its source, must be made available via the DOM or directly to an accessibility API (such as MSAA, IAccessible2, ATK/AT-SPI, etc.)
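Pulled together, the rules Steven wrote on the flipchart might read as follows -- a sketch only: the :error pseudo-class is hypothetical (Steven says above "we don't know it is allowed, but let's assume it allowed"), and the nosrc class is the flipchart's own device:

```html
<!-- Transcription of the flipchart CSS. The logic is the flipchart's
     sketch, not verified, and *[src]:error is a hypothetical
     pseudo-class. -->
<style type="text/css">
  /* normal case: render the external resource in place of the content */
  *[src]:before { content: attr(src); }
  *[src]        { display: none; }
  /* images switched off: show the element's own (fallback) content */
  body.nosrc *[src]        { display: block; }
  body.nosrc *[src]:before { content: ""; }
  /* src failed: fall back, using the hypothetical :error pseudo-class */
  *[src]:error:before      { content: ""; }
</style>
```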
11:12:33 [yamx]
Roland: we will talk XML Event 2 after lunch break.
11:12:56 [yamx]
s/Event 2/Events 2/
11:13:54 [oedipus]
when will we resume? how much time are you breaking for lunch?
11:14:45 [Steven]
s/display:blank/display:block/
11:15:32 [Steven]
Gregory, I expect lunch will be an hour
11:16:43 [oedipus]
thanks - now if only i had a left-over canoli (sp?) to eat for breakfast...
11:16:52 [Steven]
Gregory, wrt your content: question
11:17:03 [oedipus]
yeah
11:17:10 [Steven]
I didn't put any real content there
11:17:16 [Steven]
in one case it is the embedded image
11:17:40 [Steven]
in the other case, the embedded image is overwritten with 'nothing' (empty string)
11:17:46 [Steven]
the dom remains the same in both cases
11:17:55 [Steven]
(I think)
11:18:18 [Steven]
and the DOM contains the URL of the image, and the alternate content, both of which reside in the DOM without change
11:18:33 [oedipus]
do you think the UAAG2 req unreasonable?
11:18:40 [Steven]
and so is accessible to any software that wants to make use of it
11:18:47 [oedipus]
ok
11:19:14 [Steven]
I have no problem with the UAAG2 requirement
11:19:26 [oedipus]
that's certainly good to hear
11:19:40 [Steven]
In fact it is a pain when you copy a numbered list, and don't get the numbers in the copy buffer
11:19:40 [oedipus]
the tricky bit is wording it correctly
11:19:55 [Steven]
because of the generated content problem
11:20:26 [yamx]
Rich: Guideline for browser for markup?
11:20:30 [oedipus]
if CSS is used to control list styling, one doesn't get that info from an assistive tech - it uses the "dumb" nesting default algorithm
11:20:45 [yamx]
Shane: we provided default CSS for XHMTL2.
11:20:52 [Steven]
But for instance Gergory, XForms which also generates content, amongst other places via repeats talks of a shadom tree (or DOM)
11:20:58 [Steven]
so you have two DOMs
11:21:17 [oedipus]
that's the tricky bit UAAG is trying to deal with -- multiple DOMs
11:21:18 [yamx]
Shane: Browser people complained "don't constrain browsers".
11:21:20 [Steven]
s/Ger/Gre/
11:21:55 [oedipus]
browser people are always complaining -- just like their users!!!
11:22:59 [Steven]
s/shadom/shadow/
11:23:16 [oedipus]
steven, your point about copying content with CSS-generated text is a mirror of the AT user's problem, only it is constant, not situational
11:24:18 [yamx]
Shane: I have an action item to turn OWL into RDFa.
11:24:27 [yamx]
.. at the end of role spec.
11:24:33 [Steven]
ack that Gregory
11:25:23 [oedipus]
rich, i'm not sure i understand your question -- UAAG guideline would be markup agnostic -- if something's writing to the visual palette, capture it in the DOM or expose it directly to an accessibility API is what UAAG is discussing
11:26:03 [yamx]
Steven: at the end of vocab, RDFa is readable for human and machine, everyone happy.
11:26:18 [ShaneM]
Creative Commons stuff in RDFa:
11:26:59 [yamx]
Lunch break.
11:27:37 [Steven]
Pity that they use the wrong DOCTYPE. Otherwise COOL!
11:28:43 [yamx]
(we really go to lunch, now...)
11:28:45 [oedipus]
goda del pranzo, tutto (enjoy your lunch, everyone)
11:29:07 [ShaneM]
ShaneM has left #xhtml
12:11:49 [oedipus]
oedipus has joined #xhtml
12:21:45 [ShaneM]
ShaneM has joined #xhtml
12:24:22 [OedipusWrecked]
OedipusWrecked has joined #xhtml
12:54:10 [myakura]
myakura has joined #xhtml
13:16:17 [Steven]
back finally
13:16:30 [Steven]
sorry, slow service
13:16:38 [oedipus]
13:19:12 [yamx]
yamx has joined #xhtml
13:19:21 [yamx]
(People are back from Lunch)
13:22:46 [Steven]
rrsagent, make minutes
13:22:46 [RRSAgent]
I have made the request to generate
Steven
13:22:59 [Steven]
Present+Simone Onofri
13:24:28 [Steven]
s/Brat/Bratt/G
13:24:50 [ShaneM]
Mark sent belated regrets for today. I have not yet determined if he might be available tomorrow.
13:25:30 [oedipus]
Shane, thou art the bearer of bad news, but since i took a vow not to shoot the messenger...
13:26:17 [Steven]
RDFa is rather eating up his time
13:27:04 [Steven]
shane, ringing
13:27:05 [oedipus]
but it tastes SO GOOD
13:27:50 [Steven]
shane? ShaneM?
13:28:40 [Steven]
Simone is an IWA member
13:28:49 [Steven]
... and a member of SWD
13:29:21 [Steven]
... and is involved with RDFa
13:29:57 [Steven]
... attends the RDFa calls
13:30:39 [yamx]
Topic: XML Events 2
13:30:53 [oedipus]
q+ to ask question via IRC
13:30:54 [Roland_]
13:31:07 [Steven]
ack oedipus
13:31:09 [oedipus]
i know what the WG has said, and what Section 3.3. "Attaching Attributes Directly to the Handler Element" says, but isn't there still a need for a "purpose" element/property for event handler? shouldn't that be part of the core architecture? it is needed for, amongst other things, providing a foundation on which to build "expert handlers for specialized markup" for assistive technologies:
13:31:09 [oedipus]
13:31:09 [oedipus]
and
13:31:09 [oedipus]
(don't let the domain name fool you -- this is a proposal from the Open Accessibility Workgroup formerly of the FreeStandards Group (FSG) which is platform agnostic)
13:31:38 [Simone]
Simone has joined #xhtml
13:32:43 [ShaneM]
cant we just use "role" for that Gregory?
13:33:12 [oedipus]
thinking...
13:33:17 [oedipus]
RichS?
13:33:28 [yamx]
Roland: I have a suggestion, found in a thread...
13:33:33 [Roland_]
I commented on this topic a while ago:
13:33:48 [oedipus]
thanks for pointer - am checking
13:35:01 [oedipus]
roland, i am intrigued by your ideas, and would like to subscribe to your newsletter...
13:35:16 [Steven]
Another two long pages for us to digest Gregory :-)
13:35:23 [oedipus]
seriously, you raise an excellent point
13:35:33 [oedipus]
steven, i guess i speed listen
13:36:08 [yamx]
Shane: Purpose disappeared thanks to "role".
13:36:57 [yamx]
Roland: adding info for human is complementary.
13:37:38 [oedipus]
but, shane, isn't there a case for defining a specific role for the task? roland's example of a <hint> is a good example of author abuse, like saying "go get a real browser" in a NOFRAMES (sorry roland)
13:38:09 [oedipus]
it doesn't tell anything human or machine parseable, is what i mean
13:38:40 [ShaneM]
well - @role should be machine parseable. human readable is another, potentially important area
13:39:24 [Rich]
Rich has joined #xhtml
13:39:32 [Steven]
Gregory, how do you see the purpose element being used in accessible software?
13:40:55 [oedipus]
the expert handler will probably be an ontological interpreter that facilitates read/write access for users of AT (assistive tech) - it needs to communicate to the AT info that allows for meaningful 2-way communication with specialized markup languages, so that each AT doesn't have to implement specific handlers for specific SMLs (specialized markup languages)
13:41:42 [oedipus]
it is specialized middle-ware that will rely on OWL or RDF, most likely (er, if i had my way)
13:42:13 [Steven]
So if a handler had a label element (for instance), that briefly described the purpose, is that enough?
13:42:21 [oedipus]
yes
13:42:25 [oedipus]
what does rich think?
13:42:37 [Steven]
He just said "That'll work"
13:42:45 [oedipus]
i think so to
13:42:49 [oedipus]
s/to/too
13:43:09 [yamx]
Roland: talked about xForms, hint, label, and role.
13:43:57 [oedipus]
problem with assistive tech is that everything is done through keyboard overlays -- case in point -- i have to go into "table navigation" mode to make sense of anything in a TABLE, but when i ReadAll, the table is read from top lefthand corner to bottom righthand corner, forcing me to stop at EVERY table in a document if i am to understand what probably should have been in a DL in the first place
13:45:14 [oedipus]
it is even worse when there is an HTML form in the table, as FIELDSET, LEGEND, and LABEL aren't legal in HTML4x/XHTML1.0 tables
13:46:47 [yamx]
Shane:table discussion does not relate to event handler.
13:46:48 [oedipus]
there is a separate "forms mode" which doesn't tell you anything about the table containing the form, so one has to switch to "table navigation" which means one no longer knows what state (or even what INPUT type) the form control he/she is attempting to contextualize is in
13:47:08 [oedipus]
ok, i'll get off my high horse, shane...
13:47:30 [Steven]
(Shane's comment was an explanation, not a complaint)
13:48:04 [oedipus]
should have added a <wink>
13:48:39 [yamx]
Shane: separate module, scope event, qname in DOM3. that leading to this, Roland.
13:49:13 [yamx]
Roland: DOM Level 2 interface, rather than Level 3.
13:49:42 [oedipus]
GJR: just want to re-iterate that i am open to a label or some other named element that briefly described the purpose of the handler
13:51:32 [yamx]
Steven: We originally put script only when it does not support following name spaces.
13:51:53 [yamx]
.. for legacy cases.
13:52:12 [ShaneM]
This might be useful for understanding what is different:
13:52:20 [yamx]
(Steven wrote an example for script element in a flipchart.)
13:52:44 [oedipus]
yam, you are doing an EXCELLENT job of minuting
13:53:16 [yamx]
thank you.
13:53:28 [oedipus]
thank you!
13:53:57 [yamx]
<script src="xhtml.js" type="...." implements="Ns URI" />
13:54:26 [yamx]
Steven: should we put into XHMTL2 or hander spec?
13:55:02 [oedipus]
can i raise both hands?
13:55:09 [Steven]
So I propose adding in XML Events 2 <script src="xforms.js" type="..." implements="... NS URI ..." />
13:55:18 [yamx]
s/XHMTL2/XHTML2/
13:55:34 [oedipus]
i think that would work, steven
13:56:03 [Steven]
@implements tells the system that if they have an implementation of that NS, ignore this script
13:57:32 [Steven]
Shane: We can also do that with an @if that checks HASFEATURE
13:57:56 [Steven]
... so @implements is then a shorthand
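A sketch of the two equivalent forms discussed (file names and the namespace URI are illustrative, and the @if/hasFeature syntax is a guess at Shane's shorthand argument, not agreed spec text):

```html
<!-- Proposed @implements: if the user agent natively implements the
     named namespace, it ignores this script. Values illustrative. -->
<script src="xforms.js" type="text/javascript"
        implements="http://www.w3.org/2002/xforms" />

<!-- Shane's longhand equivalent, assuming an @if that can consult
     hasFeature (hypothetical syntax): -->
<script src="xforms.js" type="text/javascript"
        if="not(hasFeature('http://www.w3.org/2002/xforms'))" />
```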
13:58:25 [gshults]
gshults has joined #xhtml
13:58:47 [Steven]
present+Gerrie
13:58:54 [oedipus]
that makes sense to me (which should alarm you, steven!)
13:58:55 [gshults]
Good morning
13:59:00 [yamx]
Roland: this XML Events 2 defines two modules (events, handler).
13:59:10 [Steven]
Shane will skype you Gerrie
13:59:15 [Steven]
to save bandwidth at this end
13:59:17 [gshults]
OK
13:59:41 [oedipus]
s/steven!/shane!
14:00:16 [Steven]
Yam: I have a minor comment
14:01:25 [yamx]
Yam: new version of HTML in "introduction" is ambiguous now. To XHTML.
14:01:26 [Steven]
... we should s/a new version of HTML/a new version of XHTML/ in the intro
14:02:56 [yamx]
Yam: All XML applications in W3C follows this "XML Event 2"?
14:03:09 [yamx]
Steven: Yes.
14:03:38 [yamx]
s/Yes./Probably./
14:03:58 [Steven]
Not all XML applications use it, though they all could (if they use DOM Events)
14:04:50 [yamx]
Steven: In the past, some groups mentioned they had some special requirements, but no, they had the same problem.
14:06:34 [yamx]
Roland: Introduction needs updating, covering handerls.
14:06:38 [Steven]
... which was that parts of the DOM tree can be unavailable at times
14:06:43 [yamx]
s/handerls/handlers/
14:06:48 [ShaneM]
ACTION: Shane to update intro to XML Events to reflect its current scope of Handlers and Events.
14:06:59 [Steven]
... and when the part of the tree is unavailable, they mustn't generate events, nor handle them
14:08:22 [yamx]
Steven: XHTML2 says that it uses XML Events 2. XML Events 2 will decide whether DOM2 or DOM3 event.
14:09:30 [yamx]
Roland: fall-back is in XHTML2.
14:10:02 [yamx]
Roland: DOM2 did not have load error.
14:10:18 [Steven]
It sounds like we need DOM3 events since it has the new-style load event, and the error event
14:10:54 [yamx]
Roland: how about just put a note in XHTML2.
14:11:47 [yamx]
Roland: put a note in XML Event 2.
14:12:41 [yamx]
Shane: It says DOM2 and qname in the current spec.
14:13:03 [yamx]
Steven: last week, Form group met.
14:13:15 [yamx]
... hopefully seriously going, now.
14:13:38 [yamx]
Roland: propose to switch for DOM 3 Event.
14:15:00 [yamx]
Roland: let's refer to the last published.
14:15:17 [yamx]
RESOLUTION: XML Event 2 will be based on DOM 3 Event.
14:15:26 [yamx]
s/Event/Events/
14:17:10 [yamx]
Shane: the current DOM 3 spec has how to map qname in DOM2 compatible.
14:18:57 [yamx]
ACTION: Shane to update "Introduction"(HTML->XHTML) and references to DOM3.
14:19:56 [yamx]
Shane: XML Event 1 shortname will result in the same doc..
14:20:42 [yamx]
Roland: no suffix, XML Event shortname.
14:21:46 [yamx]
Roland: green part, chameleon namespace at the end of 2.1 Document Conformance.
14:22:49 [yamx]
Shane: conceptually supercedes any markuplanguage. implementers can magically deal with. That is an issue.
14:23:03 [yamx]
s/with./with?/
14:23:51 [yamx]
Shane: It is Mark's issue, he is not here.
14:24:09 [yamx]
s/markuplanguage/markup language/
14:26:10 [yamx]
Roland: regardless of markup language, general processor does. built-in does.
14:26:39 [yamx]
Roland: we are not sure about what the implications are for implementors.
14:27:45 [yamx]
Roland: do we have proforma text, highlighting features for asertions?
14:27:52 [yamx]
s/asertions/assertions/
14:28:40 [yamx]
Roland: machine extracting assertions for conformance testing, it is not available now.
14:29:44 [yamx]
Roland: proforma text, it will be dealt separately.
14:29:48 [ShaneM]
new shortname should be xml-events2 whilst we are developing.
14:32:21 [yamx]
Shane: we need xml id in an explicit way? Steven?
14:32:48 [yamx]
Steven: We definitely need it.
14:34:27 [yamx]
Roland: default action, can we stop it? decision in listener?
14:34:50 [yamx]
Shane: we have it in handler section.
14:35:26 [yamx]
Roland: precedence perform and prevent?
14:35:43 [yamx]
s/precedence/precedence issue,/
14:36:29 [Steven]
We recommend this syntax for latest version URIs:
14:36:40 [Steven]
When a Working Group follows this scheme, Director approval of short names is not required; the Communications Team can allocate them (provided they are reasonable, not offensive, etc.).
14:36:48 [Steven]
14:37:07 [yamx]
Steven: it is XPath 1.
14:39:38 [ShaneM]
Note that we explicitly only require a subset of XPath semantics: XML Events XPath expressions have no context node, and so the context position is 0 and the context size is 0. There are no variable bindings, and the function library contains the functions described below. It is not necessary to provide namespace declarations.
14:40:42 [yamx]
Shane: 3.1 Listner element, there is Editor's note, which we deferred.
14:41:00 [yamx]
s/Listner/Listener/
14:41:20 [yamx]
Roland: URI is in the universe, security risk.
14:42:30 [yamx]
Roland: keep id, for compatibility. Introducing URL later.
14:43:38 [yamx]
Roland: Dispatcher element has target child attribute.
14:45:01 [yamx]
Shane: we defined them in global for target.
14:45:07 [yamx]
(Steven just came back)
14:45:56 [yamx]
Shane: observer can be URI. it is an issue.
14:46:06 [yamx]
s/URI./URI?/
14:46:59 [yamx]
Roland: I proposed we create observeuri if we need one.
14:47:10 [yamx]
s/uri/uri attribute/
14:47:24 [yamx]
s/one/one, later/
14:47:49 [yamx]
Roland: I propse the current Editor's note.
14:47:56 [yamx]
s/propse/propose/
14:48:09 [yamx]
s/the current/to remove the current/
14:48:22 [yamx]
Roland: I propose the text as it is (with removing Editor's Note).
14:49:06 [yamx]
Roland: while.
14:49:07 [ShaneM]
@ev:while has this note: EDITORS' NOTE: Can't think of an example that only makes use of what we have in this spec, i.e., the event() function. We may need to do something like delete a list in XForms.
14:49:28 [yamx]
Steven: Original use case is to repeat it until some condition is satisfied, in XForms.
14:50:22 [yamx]
(Steven went to a flipchart to describe an example for Roland.)
14:51:04 [yamx]
(delete all sorts of dates, if out-of-date nodes exist...)
14:51:47 [yamx]
<trigger ev:event="DOMActivate"
14:51:56 [yamx]
<label> delete all sort of dates </label>
14:52:16 [yamx]
<actionwhile="exists(outdeated)" <delete node>
14:52:21 [yamx]
</action>
14:52:32 [yamx]
s/<actionwhile/<action while/
14:54:12 [yamx]
s/deated/dated/
14:55:08 [yamx]
Roland: Section 4, table there. action elements say..
14:55:51 [yamx]
Roland: why we repeat the global "id" here. trouble to reconcile..
14:56:48 [yamx]
Steven: Action does, script doesn't, wired.
14:57:03 [yamx]
s/wired/weird/
14:57:48 [yamx]
Roland: put if and while on action and script.
14:58:01 [yamx]
Roland: not having it in listener.
14:58:20 [ShaneM]
<div ev:
14:58:26 [yamx]
Shane: not in global attributes, which you can attach to anything.
14:59:13 [ShaneM]
ACTION: migrate @if and @while from listener element to the handler elements
14:59:15 [yamx]
Steven: if refers to action only.
14:59:25 [yamx]
s/if/"if"/
14:59:35 [yamx]
s/"if"/@if/
14:59:44 [ShaneM]
s/migrate/Shane to migrate/
15:00:51 [ShaneM]
Shane: is the Basic Profile still useful? Section 3.6
15:02:02 [yamx]
Shane: Yam, do we need mobile profile? From Panasonic request a while ago.
15:02:38 [yamx]
Yam: OMA currently has no plan to customize Events, with troubles with DOM2 events and DOM3 events, no more fragmentations or burdens to thnk about.
15:03:58 [yamx]
Yam: restrictions will be good, but we need some good guidance, which we don't come up with now.
15:04:09 [yamx]
s/thnk/think/
15:04:25 [Steven]
I asked Kenneth Sklandler who is an implementor of XML Events on mobile devices.
15:04:29 [Steven]
He said:
15:05:02 [Steven]
"no i do not think it is needed. the normal xml events profile is not too complex for mobile"
15:05:48 [yamx]
Shane: MXL Event 1, expressed DOM2, DOMactivate.
15:05:54 [yamx]
s/MXL/XML/
15:06:07 [yamx]
s/expressed DOM2/expressed in DOM2/
15:06:35 [yamx]
Steven: we can use DOMactivate, it is an illustrative example.
15:07:15 [yamx]
Roland: XForms, we use DOMactivate.
15:07:25 [yamx]
Shane: overflow.
15:07:53 [yamx]
s/overflow./overflow. which we cannot find/
15:08:28 [yamx]
Roland: we have click, activate. which should be DOMactivate.
15:10:14 [yamx]
Steven: capture, bubble, target. strictly speaking, the figure...
15:10:46 [yamx]
.. capture stops at bubbling. Bubble does not start from target.
15:11:09 [yamx]
Roland: include link to introduction.
15:11:41 [yamx]
Steven: discussing arrows in exact defined meanings.
15:12:01 [yamx]
Roland: showing some definitions in DOM3 Events.
15:12:35 [yamx]
s/Events./Events, to Steven./
15:13:31 [yamx]
Roland: in event interface (1.4) DOM 3 Event.
15:13:41 [ShaneM]
AT_TARGET
15:13:41 [ShaneM]
The current event is in the target phase, i.e. it is being evaluated at the event target.
15:13:41 [ShaneM]
BUBBLING_PHASE
15:13:41 [ShaneM]
The current event phase is the bubbling phase.
15:13:41 [ShaneM]
CAPTURING_PHASE
15:13:42 [ShaneM]
The current event phase is the capture phase.
15:13:44 [ShaneM]
15:13:53 [Roland_]
Event Interface:
15:14:24 [Steven]
15:14:51 [oedipus]
thanks for pointers
15:15:08 [yamx]
Roland: we have to refer to it, not reproducing them.
15:16:29 [yamx]
.. 1.2 Event dispatch and DOM event flow (in DOM 3 Events)
15:16:49 [Steven]
I don't think I have *ever* needed to use the phase attribute
15:16:56 [Steven]
... the defaults are just right
15:17:44 [Steven]
... but I have no problem with adding a phase="target"
15:17:50 [Steven]
... as John Boyer requested
15:18:05 [Steven]
... or phase="attarget"
15:19:13 [yamx]
Roland: three options, if none specified, default (whatever DOM 3 event) applied.
15:19:17 [yamx]
Steven: Bubble includes target.
15:19:28 [yamx]
Steven: Bubble is default.
15:19:54 [ShaneM]
Just to be clear, what I think you are proposing is: phase ("bubble"* | "capture" | "target"),<br />
15:20:04 [yamx]
Roland: all of better descriptions and diagrams, better than we did.
15:20:18 [ShaneM]
ShaneM has left #xhtml
15:20:26 [ShaneM]
ShaneM has joined #xhtml
15:21:22 [yamx]
Roland: @phase has four values (3 phases + default). "default" is for backward-compatibility.
15:21:52 [yamx]
Roland: Not specified, default. "default" is a synonym for "bubble".
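As a sketch, the proposed @phase attribute on a listener might look like this (element and attribute names follow XML Events; the phase values and the bubble default are those from the flipchart discussion, and the observer/handler ids are illustrative):

```html
<!-- @phase: "capture" | "target" | "bubble" (the default when the
     attribute is omitted, per Roland's proposal). Sketch only. -->
<ev:listener event="DOMActivate" observer="menu"
             handler="#activate-handler" phase="target" />
```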
15:27:10 [Lachy]
Lachy has joined #xhtml
15:27:29 [yamx]
Roland: WML is a little bit out of date in illustration.
15:28:28 [yamx]
Roland: propose 3.5 Event Handler.
15:28:40 [yamx]
Steven: we don't need 3.6.
15:28:43 [yamx]
Shane: both gone.
15:32:48 [Lachy]
Lachy has joined #xhtml
15:33:40 [yamx]
Roland: we found section 4 new things.
15:34:16 [yamx]
Roland: Chameleon to be applied to whole spec.
15:34:46 [yamx]
Roland: back to earlier discussion, hint label and help, part of content?
15:35:00 [yamx]
Roland: any consensus?
15:35:20 [yamx]
Roland: Essentially identical, but we have to doublecheck.
15:35:39 [yamx]
s/identical/identical to XForms/
15:35:51 [Steven]
XForms: action Common, Events, Action Common (Action)+
15:36:30 [Steven]
This module also defines the content set "Action", which includes the following elements:
15:36:30 [Steven]
(action|setvalue|insert|delete|setindex|toggle|setfocus|dispatch|rebuild|recalculate|revalidate|refresh|reset|
15:36:30 [Steven]
load|send|message)*
15:38:34 [Steven]
15:40:14 [yamx]
Steven: some discrepancy from XForms, dispatch and dispatchevent.
15:40:39 [Steven]
Here is dispatch in XForms 1.1:
15:40:40 [Steven]
dispatch Common, Events, Action Common, name (xsd:NMTOKEN), target (xsd:IDREF), delay (xsd:nonNegativeInteger), bubbles (xsd:boolean), cancelable (xsd:boolean) name?, target?, delay? [in any order]
15:41:16 [yamx]
s/dispatchevent/dispatchEvent/
15:41:24 [oedipus]
roland, GJR gives strong +1 to Steven's earlier proposal: "So I propose adding in XML Events 2 <script src="xforms.js" type="..." implements="... NS URI ..." /> [...] @implements tells the system that if they have an implementation of that NS, ignore this script" and to Shane's addition: "We can also do that with an @if that checks HASFEATURE so @implements is then a shorthand"
15:41:48 [yamx]
Steven: destid.
15:42:26 [yamx]
Shane: a conf call among Shane, Steven, Mark, over a year ago.
15:43:41 [yamx]
Shane: some collision... we identified.
15:44:01 [yamx]
Steven: I am reluctant to go all this again...
15:44:08 [yamx]
.., but..
15:44:29 [Steven]
<action event="foo"
15:44:37 [Steven]
<action ev:event=="foo"
15:44:45 [Steven]
s/==/=/
15:44:54 [yamx]
Steven: equivalent way to same things.
15:45:19 [yamx]
Shane: we prohibit putting both in the same time.
15:46:17 [ShaneM]
<dispatchEvent destid="overthere" targetid="otherhandler">
15:47:13 [yamx]
Shane: do in assistive technology.
15:47:53 [yamx]
Steven: not to handler, dispatch event to elemente, handler is maybe listening.
15:48:03 [yamx]
s/elemente/element/
15:48:50 [yamx]
Steven: Document? Document means root element?
15:49:21 [yamx]
... reading dispatchEvent element.
15:50:04 [yamx]
Steven: this is a syntactic binding to DOM 3 Events. Let's read DOM 3 Events.
15:51:27 [Steven]
In DOM3 it is:
15:51:32 [Steven]
EventTarget.dispatchEvent(eventname)
15:52:11 [yamx]
Steven: destid is required attribute, not optional.
15:52:41 [yamx]
Roland: default works.
15:55:34 [yamx]
Steven: in DOM3, it is method, but here, it is not method in Section 4.
15:55:57 [yamx]
Steven: use "action" instead of "method" in Section 4.
15:56:04 [yamx]
Shane: done.
15:57:31 [yamx]
Shane: current words on destid... does it make sense?
15:58:19 [yamx]
Steven: "otherwise, the event is dispatched to "document". ???
15:59:31 [yamx]
Steven: events should be dispatched to "element" , nowhere to go.
15:59:54 [yamx]
Steven: in root, just hit root element. no other places to go.
16:00:09 [yamx]
s/in root/in root element case/
16:00:31 [yamx]
Steven: describing phases in root element.
16:01:07 [yamx]
Shane: whole bunches of handler in head element?
16:01:51 [yamx]
s/handler in/handlers in/
16:01:59 [yamx]
Steven: load event can only be in html and body, nowhere else.
16:02:39 [yamx]
Roland: dispatch in XForms and dispatchEvent in XML Event 2 , quite different attributes...
16:02:53 [yamx]
(Steven went to a flipboard...)
16:02:54 [markbirbeck]
markbirbeck has joined #xhtml
16:03:34 [yamx]
(describing "dispatchEvent" example.)
16:04:43 [yamx]
(highlighting differences between XML Event2 and XForms 1.1)
16:06:40 [Roland__]
Roland__ has joined #xhtml
16:07:40 [yamx]
Steven: cancelable="cancelable" , that's the traditional way of boolean.
16:08:56 [yamx]
<dispatchEvent name="load"
16:09:09 [yamx]
destid="#foo"
16:09:15 [yamx]
bubbles="bubbles"
16:09:28 [yamx]
cancelable="cancelable"
16:09:58 [yamx]
s/cancelable"/cancelable" \/>/
16:10:10 [yamx]
<dispatch name="load"
16:10:17 [yamx]
target="#foo"
16:10:29 [yamx]
delay="1"
16:10:34 [yamx]
bubbles="true()"
16:10:43 [yamx]
16:11:35 [Steven]
Steven has joined #xhtml
16:12:05 [ShaneM]
IanJ confirms that we need to ask the domain lead for an extra short name xml-events2 before we can make the short name changes we discussed
16:13:23 [yamx]
s/<dispatch/in XForms 1.1 <dispatch/
16:13:30 [Steven]
And yet I pasted a text above that says we don't
16:13:47 [yamx]
s/<dispatchEvent/in XML Events 2 <dispatchEvent/
16:14:10 [Steven]
When a Working Group follows this scheme, Director approval of short names is not required; the Communications Team can allocate them (provided they are reasonable, not offensive, etc.).
16:15:21 [ShaneM]
IanJ referred me to this document:
16:15:51 [Roland_]
Roland_ has joined #xhtml
16:16:15 [yamx]
Shane: I will manage this shortname issue.
16:17:46 [Steven_]
Steven_ has joined #xhtml
16:17:55 [OedipusWrecked]
OedipusWrecked has joined #xhtml
16:18:06 [yamx]
Steven:
says that we dont need Director(domain lead) approval.
16:18:16 [yamx]
s/dont/don't/
16:19:15 [yamx]
Steven: target is global in our spec, it is a problem.
16:19:46 [yamx]
Steven: we can use name(as in XForms 1.1)
16:20:06 [Steven_]
but we have a problem with @target
16:20:11 [oedipus]
oedipus has joined #xhtml
16:20:26 [yamx]
Steven: I have a sneaking feeling that we talked with XForms group...
16:21:02 [yamx]
Steven: how about "targetid" instaed of "destid", closest possible to XForms 1.1.
16:21:13 [yamx]
s/instaed/instead/
16:22:05 [yamx]
Shane: targetid is global.
16:22:21 [yamx]
.. as in Section 3.
16:23:20 [Roland_]
dispatchEvent should be changed to dispatch
16:24:12 [Roland_]
the raise attribute should be changed to name
16:24:40 [yamx]
Shane: targetid means something different in Section 3.
16:25:03 [Roland_]
bubbles attribute should be changed to an Xpath expression that evaluates to a boolean
16:25:21 [Roland_]
cancelable attribute should be changed to an Xpath expression that evaluates to a boolean
16:25:35 [markbirbeck_]
markbirbeck_ has joined #xhtml
16:26:37 [yamx]
Steven: both target and targetid are called away.
16:26:56 [yamx]
Steven: how about "to"?
16:27:09 [yamx]
Roland: fine with me.
16:27:16 [Steven_]
<dispathc name="DOMActivate" to="#foo"
16:27:26 [Steven_]
s/thc/tch/
16:27:49 [Steven_]
rrsagent, make minutes
16:27:49 [RRSAgent]
I have made the request to generate
Steven_
16:29:10 [yamx]
Steven: OK?
16:29:20 [yamx]
Roland: let's check how much left.
16:29:23 [oedipus]
ok by me
16:29:41 [Steven_]
Regrets: Tina, Mark
16:30:14 [Steven_]
rrsagent, make minutes
16:30:14 [RRSAgent]
I have made the request to generate
Steven_
16:30:22 [yamx]
Steven: we did not resolve hint label and help.
16:30:39 [yamx]
.. as a child of action.
16:30:50 [yamx]
Steven: XForms does not do that.
16:30:58 [oedipus]
i gave +1 to your proposal, steven
16:30:58 [yamx]
Steven: but we could.
16:31:52 [yamx]
Steven: role is global attribute.
16:32:12 [yamx]
Roland: rather than xh:role..
16:32:58 [yamx]
Steven: two separate modules, language designers will combine if necessary.
16:34:58 [yamx]
Steven: event cancelable or not is defined where the event is defined.
16:35:26 [markbirbeck_]
swdwg has just voted that rdfa should go to last call. :)
16:36:01 [Steven_]
Woh!
16:36:07 [Steven_]
Congrats all round!
16:36:27 [Steven_]
Unfortunately we resolved today that it is not yet ready
16:36:28 [ShaneM]
I will prepare a formal WD for publication. Who is sending the publication request?
16:36:39 [Steven_]
ha ha
16:36:43 [markbirbeck_]
lol
16:37:07 [Steven_]
GOod question. I suppose it needs to be a joint request
16:37:19 [Steven_]
s/GO/Go/
16:37:49 [Rich]
Rich has joined #xhtml
16:37:59 [Roland_]
the destid attribute should be changed to "to"
16:39:40 [yamx]
Roland: addEventListner has four attributes, not described.
16:39:56 [yamx]
s/Listner/Listener/
16:40:14 [yamx]
Roland: 4.5 removeEventListener has similar problems.
16:40:37 [yamx]
Roland: they are all global, not duplicating.
16:41:26 [yamx]
Roland: just to make it helpful, to describe attributes.
16:41:40 [yamx]
s/Roland: they are/Shane: they are/
16:43:29 [yamx]
Shane: addEventListener uses attributes (global attributes from the Listener element).
16:43:42 [yamx]
Shane: but not disagree, add them here.
16:44:48 [ShaneM]
ACTION: Shane to ensure that all listener events are shown on all elements in the Handler module.
16:45:16 [yamx]
Steven: it is done for today.
16:45:26 [yamx]
(because it is Steven's birthday)
16:45:35 [Rich]
Rich has left #xhtml
16:45:46 [oedipus]
buon compleanno, steven -- goda il vostro pranzo!
16:45:59 [Steven_]
Gracie
16:46:06 [yamx]
Roland: we will discuss some more comments on XML Event 2 tomorrow morning (3 comments).
16:46:15 [Steven_]
Grazie
16:46:37 [Steven_]
(From JohnB)
16:46:45 [Steven_]
rrsagent, make minutes
16:46:45 [RRSAgent]
I have made the request to generate
Steven_
16:46:48 [oedipus]
good night, grazie <wink>
16:47:08 [gshults]
Happy Birthday, Steven. Please survive the night to return and participate tomorrow!
16:47:22 [yamx]
We have to survive.
16:47:22 [Steven_]
If I must
16:47:49 [ShaneM]
publication target date for rdfa-syntax is thursday
16:47:59 [oedipus]
congrats
16:48:43 [Steven_]
OK
16:52:18 [Steven_]
Where do the rdfa comments get sent to?
16:52:23 [Steven_]
html-editor?
16:52:35 [ShaneM]
yes
16:52:40 [ShaneM]
err..... I think so
16:52:45 [ShaneM]
Ralph is doing pub request
16:52:54 [Steven_]
Oh, then I will stop doing it
16:52:55 [Steven_]
:-)
16:52:57 [Steven_]
great
16:53:00 [ShaneM]
thanks!
16:53:17 [Steven_]
does he have the URL of our decision to go to last call?
16:53:29 [ShaneM]
he should.
16:53:51 [ShaneM]
no worries. I can find it if not
16:54:12 [Steven_]
tx
16:54:51 [ShaneM]
ShaneM has left #xhtml
16:55:43 [Steven_]
Topic: XHTML Basic 1.1
16:55:50 [Steven_]
scribe: Steven
16:56:21 [Steven_]
Steven: I have had a message from Steve Bratt that he is OK with our option 1 (to accept a single implementation of inputmode).
16:56:31 [Steven_]
... so we can move to PR quickly
16:56:41 [Steven_]
rrsagent, make minutes
16:56:41 [RRSAgent]
I have made the request to generate
Steven_
17:12:36 [markbirbeck]
markbirbeck has joined #xhtml
17:24:31 [oedipus]
oedipus has left #xhtml
19:33:35 [John_M_Boyer]
John_M_Boyer has joined #xhtml
20:30:16 [markbirbeck]
markbirbeck has joined #xhtml
22:36:47 [Lachy]
Lachy has joined #xhtml
22:43:26 [sbuluf]
sbuluf has joined #xhtml
Closed Bug 714937 Opened 10 years ago Closed 9 years ago
replace GFX_PACKED_PIXEL with an inline function
Categories
(Core :: Graphics, defect)
Tracking
()
mozilla18
People
(Reporter: jrmuizel, Assigned: Luqman)
Details
(Whiteboard: [mentor=jrmuizel][lang=c++])
Attachments
(1 file, 5 obsolete files)
No description provided.
There's no need for GFX_PACKED_PIXEL to be a macro anymore and having it as an inline function will make things cleaner.
Whiteboard: [mentor=jrmuizel][lang=c++]
Comment on attachment 639149 [details] [diff] [review]
Replace the GFX_PACKED_PIXEL macro with an inline function.

Review of attachment 639149 [details] [diff] [review]:
-----------------------------------------------------------------

::: gfx/thebes/gfxColor.h
@@ +107,5 @@
> +/**
> + * Pack the 4 8-bit channels (A,R,G,B)
> + * into a 32-bit packed premultiplied pixel.
> + */
> +PRUint32 inline

Use MOZ_ALWAYS_INLINE instead.

@@ +118,5 @@
> +  } else {
> +    return ((a) << 24) |
> +           (GFX_PREMULTIPLY(r,a) << 16) |
> +           (GFX_PREMULTIPLY(g,a) << 8) |
> +           (GFX_PREMULTIPLY(b,a));

GFX_PREMULTIPLY and GFX_PACKED_PIXEL_NO_PREMULTIPLY could also be changed to inline functions. But you can do that in other patches.
Attachment #639149 - Flags: review-
Use MOZ_ALWAYS_INLINE.
Attachment #639149 - Attachment is obsolete: true
Replaced the other macros as well.
Attachment #639152 - Attachment is obsolete: true
Attachment #639158 - Flags: review?(jmuizelaar)
Assignee: nobody → laden
Status: NEW → ASSIGNED
This patch is badly bitrotted (it doesn't apply at all anymore). Please rebase and attach a new patch. Also, please make sure that your patch follows the guidelines below so that all the necessary commit information is present within it. Also, bonus points if you post a link to a successful Try run with the patch! (If you don't, that's fine. I run patches myself before landing if I can't see that they already were)
Attachment #639158 - Attachment is obsolete: true
Missed some OS X specific stuff so this should fix that. As for the android build failures, for some reason they don't seem to have MOZ_ALWAYS_INLINE defined.
Attachment #658748 - Attachment is obsolete: true
Builds on both Android and OS X now.
Attachment #658767 - Attachment is obsolete: true
Looks good on Try, thanks for sticking with it! I'll check this in today unless someone else beats me to it.
This might have been part of a shutdown timeout (see this log: <> for an example). I backed it out for now:
It wasn't. Re-landed.
Flags: in-testsuite-
Status: ASSIGNED → RESOLVED
Closed: 9 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla18
Comment on attachment 658804 [details] [diff] [review]
Replace GFX_PACKED_PIXEL/GFX_PACKED_PIXEL_NO_PREMULTIPLY/GFX_PREMULTIPLY macros

Review of attachment 658804 [details] [diff] [review]:
-----------------------------------------------------------------

::: gfx/thebes/gfxColor.h
@@ +97,5 @@
>   *
>   * equivalent to (((c)*(a))/255)
>   */
> +PRUint8 MOZ_ALWAYS_INLINE gfxPreMultiply(PRUint8 c, PRUint8 a) {
> +    return GFX_DIVIDE_BY_255((c)*(a));

Would have been nice to get rid of the stray parens here, as in return GFX_DIVIDE_BY_255(c * a); , now this isn't a macro anymore.
C# Preprocessor Directives
Control flow statements evaluate expressions at runtime. In contrast, the C# preprocessor is invoked during compilation. The preprocessor commands are directives to the C# compiler, specifying the sections of code to compile or identifying how to handle specific errors and warnings within the code. C# preprocessor commands can also provide directives to C# editors regarding the organization of code.
Each preprocessor directive begins with a hash symbol (#), and all preprocessor directives must appear on one line. A newline rather than a semicolon indicates the end of the directive.
A list of each preprocessor directive appears in Table 3.4.
Table 3.4. Preprocessor Directives
Excluding and Including Code (#if, #elif, #else, #endif)
Perhaps the most common use of preprocessor directives is in controlling when and how code is included. For example, to write code that could be compiled by both C# 2.0 and later compilers and the prior version 1.2 compilers, you use a preprocessor directive to exclude C# 2.0-specific code when compiling with a 1.2 compiler. You can see this in the tic-tac-toe example and in Listing 3.53.
Listing 3.53. Excluding C# 2.0 Code from a C# 1.x Compiler
#if CSHARP2
    System.Console.Clear();
#endif
In this case, you call the System.Console.Clear() method, which is available only in 2.0 CLI and later versions. Using the #if and #endif preprocessor directives, this line of code will be compiled only if the preprocessor symbol CSHARP2 is defined.
Another use of the preprocessor directive would be to handle differences among platforms, such as surrounding Windows- and Linux-specific APIs with WINDOWS and LINUX #if directives. Developers often use these directives in place of multiline comments (/*...*/) because they are easier to remove by defining the appropriate symbol or via a search and replace. A final common use of the directives is for debugging. If you surround code with an #if DEBUG, you will remove the code from a release build on most IDEs. The IDEs define the DEBUG symbol by default in a debug compile and RELEASE by default for release builds.
To handle an else-if condition, you can use the #elif directive within the #if directive, instead of creating two entirely separate #if blocks, as shown in Listing 3.54.
Listing 3.54. Using #if, #elif, and #endif Directives
#if LINUX
...
#elif WINDOWS
...
#endif
Defining Preprocessor Symbols (#define, #undef)
You can define a preprocessor symbol in two ways. The first is with the #define directive, as shown in Listing 3.55.
Listing 3.55. A #define Example
#define CSHARP2
The second method uses the define option when compiling for .NET, as shown in Output 3.27.
Output 3.27.
>csc.exe /define:CSHARP2 TicTacToe.cs
Output 3.28 shows the same functionality using the Mono compiler.
Output 3.28.
>mcs.exe -define:CSHARP2 TicTacToe.cs
To add multiple definitions, separate them with a semicolon. The advantage of the define compiler option is that no source code changes are required, so you may use the same source files to produce two different binaries.
To undefine a symbol you use the #undef directive in the same way you use #define.
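For example (a minimal sketch reusing the CSHARP2 symbol from above), #undef removes a symbol for the remainder of the file:

```csharp
#define CSHARP2
#undef CSHARP2
// From this point on, code inside #if CSHARP2 ... #endif is excluded.
```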
Emitting Errors and Warnings (#error, #warning)
Sometimes you may want to flag a potential problem with your code. You do this by inserting #error and #warning directives to emit an error or warning, respectively. Listing 3.56 uses the tic-tac-toe sample to warn that the code does not yet prevent players from entering the same move multiple times. The results of Listing 3.56 appear in Output 3.29.
Listing 3.56. Defining a Warning with #warning
#warning "Same move allowed multiple times."
Output 3.29.
Performing main compilation...
...\tictactoe.cs(471,16): warning CS1030: #warning: '"Same move allowed multiple times."'
Build complete -- 0 errors, 1 warnings
By including the #warning directive, you ensure that the compiler will report a warning, as shown in Output 3.29. This particular warning is a way of flagging the fact that there is a potential enhancement or bug within the code. It could be a simple way of reminding the developer of a pending task.
Turning Off Warning Messages (#pragma)
Warnings are helpful because they point to code that could potentially be troublesome. However, sometimes it is preferred to turn off particular warnings explicitly because they can be ignored legitimately. C# 2.0 and later compilers provide the preprocessor #pragma directive for just this purpose (see Listing 3.57).
Listing 3.57. Using the Preprocessor #pragma Directive to Disable the #warning Directive
#pragma warning disable 1030
Note that warning numbers are prefixed with the letters CS in the compiler output. However, this prefix is not used in the #pragma warning directive. The number corresponds to the warning error number emitted by the compiler when there is no preprocessor command.
To reenable the warning, #pragma supports the restore option following the warning, as shown in Listing 3.58.
Listing 3.58. Using the Preprocessor #pragma Directive to Restore a Warning
#pragma warning restore 1030
In combination, these two directives can surround a particular block of code where the warning is explicitly determined to be irrelevant.
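Put together, the pair brackets just the code where the warning should be suppressed -- a sketch reusing warning 1030 from Listing 3.57:

```csharp
#pragma warning disable 1030
#warning "Same move allowed multiple times."
#pragma warning restore 1030
// #warning directives from here on are reported again.
```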
Perhaps one of the most common warnings to disable is CS1591, as this appears when you elect to generate XML documentation using the /doc compiler option, but you neglect to document all of the public items within your program.
nowarn:<warn list> Option
In addition to the #pragma directive, C# compilers generally support the nowarn:<warn list> option. This achieves the same result as #pragma, except that instead of adding it to the source code, you can insert the command as a compiler option. In addition, the nowarn option affects the entire compilation, and the #pragma option affects only the file in which it appears. Turning off the CS1591 warning, for example, would appear on the command line as shown in Output 3.30.
Output 3.30.
> csc /doc:generate.xml /nowarn:1591 /out:generate.exe Program.cs
Specifying Line Numbers (#line)
The #line directive controls on which line number the C# compiler reports an error or warning. It is used predominantly by utilities and designers that emit C# code. In Listing 3.59, the actual line numbers within the file appear on the left.
Listing 3.59. The #line Preprocessor Directive
124 #line 113 "TicTacToe.cs"
125 #warning "Same move allowed multiple times."
126 #line default
Including the #line directive causes the compiler to report the warning found on line 125 as though it was on line 113, as shown in the compiler error message shown in Output 3.31.
Output 3.31.
Performing main compilation...
...\tictactoe.cs(113,18): warning CS1030: #warning: '"Same move allowed multiple times."'
Build complete -- 0 errors, 1 warnings
Following the #line directive with default reverses the effect of all prior #line directives and instructs the compiler to report true line numbers rather than the ones designated by previous uses of the #line directive.
Hints for Visual Editors (#region, #endregion)
C# contains two preprocessor directives, #region and #endregion, that are useful only within the context of visual code editors. Code editors, such as the one in the Microsoft Visual Studio .NET IDE, can search through source code and find these directives to provide editor features when writing code. C# allows you to declare a region of code using the #region directive. You must pair the #region directive with a matching #endregion directive, both of which may optionally include a descriptive string following the directive. In addition, you may nest regions within one another.
Again, Listing 3.60 shows the tic-tac-toe program as an example.
Listing 3.60. A #region and #endregion Preprocessor Directive
...
#region Display Tic-tac-toe Board
#if CSHARP2
    System.Console.Clear();
#endif
    // Display the current board;
    border = 0; // set the first border (border[0] = "|")

    // Display the top line of dashes.
    // ("\n---+---+---\n")
    System.Console.Write(borders[2]);
    foreach (char cell in cells)
    {
        // Write out a cell value and the border that comes after it.
        System.Console.Write(
            " {0} {1}", cell, borders[border]);

        // Increment to the next border;
        border++;

        // Reset border to 0 if it is 3.
        if (border == 3)
        {
            border = 0;
        }
    }
#endregion Display Tic-tac-toe Board
...
One example of how these preprocessor directives are used is with Microsoft Visual Studio .NET. Visual Studio .NET examines the code and provides a tree control to open and collapse the code (on the left-hand side of the code editor window) that matches the region demarcated by the #region directives (see Figure 3.5).
Figure 3.5. Collapsed Region in Microsoft Visual Studio .NET
Dear experts,
I am doing my assignment on finding how many numbers from 1 to N satisfy the requirement K (a digit sum equal to K).
However, when testing the code, I could compile it but I couldn't execute it.
I am wondering if I made a mistake in the line
x = result + y; // add them together and return it.
as when I comment this out, my program is able to execute, though I didn't get the right output value.
Please kindly enlighten me.
Logic of my codes (example):
N = 15, K = 3
When N = 1, result = 0 + (N%10)
= 0 + (1%10)
= 1
y = N/10
= 1/10
= 0
x = 1 + 0
= 1
return x = 1
if (sumOfDigits(1) == K)
noOfNoSatisfy ++;
----------------------------------------------------
In the case of above, noOfSatisfy still remains as 0
-----------------------------------------------------
When N = 12, result = 0 + (N%10)
= 0 + (12%10)
= 2
y = N/10
= 12/10
= 1
x = 1 + 2
= 3
return x = 3
if (sumOfDigits(3) == K)
noOfNoSatisfy ++;
In the case of above, noOfSatisfy increase and plus 1
-----------------------------------------------------
import java.util.*;

public class Digits {

    public static int sumOfDigits(int x) {
        int result = 0;
        int y = 0;
        //eg. Let x be 1
        while (x > 0) {
            // eg. 1 = 1 + (1%10);
            result = result + (x % 10); //take the last digit of x
            // eg. 0 = 1/10
            y = x / 10; //take the first digit of x
            // eg. 1 = 1 + 0
            x = result + y; // add them together and return it.
        }
        //eg. return 1
        return x;
    }

    public static void main(String[] args) {
        int N, K, noOfNoSatisfy;
        Scanner sc = new Scanner(System.in);
        //eg. Let N be 1
        N = sc.nextInt(); //indicate Number
        //eg. Let K be 1
        K = sc.nextInt(); //indicate requirement number
        noOfNoSatisfy = 0; // to count how many time it matches with K (requirement number)
        for (int i = 1; i <= N; i++) {
            if (sumOfDigits(i) == K) { //eg. 1 == K
                noOfNoSatisfy++; // plus 1 to count
            }
        }
        System.out.println(noOfNoSatisfy); //print the number of count
    }
}
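For anyone hitting the same thing: the suspect line is indeed `x = result + y;`. It feeds the running total back into the loop variable, so x no longer shrinks toward 0 as the digits are consumed — for many inputs it keeps growing (and can even wrap around), which is why the program compiles but seems to hang or prints nonsense. The loop variable should only shrink via x / 10, and the digit sum should be returned from result. A corrected sketch, keeping the original class shape:

```java
import java.util.Scanner;

public class Digits {

    // Sum of decimal digits: peel off the last digit with % 10,
    // then shrink x with / 10 so the loop always terminates.
    public static int sumOfDigits(int x) {
        int result = 0;
        while (x > 0) {
            result = result + (x % 10); // take the last digit
            x = x / 10;                 // drop the last digit
        }
        return result; // return the accumulated sum, not x
    }

    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        int n = sc.nextInt(); // N: check the numbers 1..N
        int k = sc.nextInt(); // K: the required digit sum
        int count = 0;
        for (int i = 1; i <= n; i++) {
            if (sumOfDigits(i) == k) {
                count++;
            }
        }
        System.out.println(count);
    }
}
```

With this version, entering 15 and 3 prints 2 (the numbers 3 and 12 have digit sum 3).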
, ericP> Topic:Polle?
<sandro> Topic: CREATE and DROP, named graph support is provided by storing triples in units of storage called RDF "models"
13:37:40 <bglimm> ... queries for non-existing graphs return empty results, we are considering support for named graphs at a finer granularity
13:39:22 <bglimm> ... for security, we thought about having certain namespaces (represented by RDF models)
13:42:42 <bglimm> Greg: My new store is a quad store with no bookkeeping, my old one had support for empty graphs
13:43:00 <bglimm> AxelPolleres: It seems we have a clear undecided situation
13:43:23 <LeeF> I don't really understand the full question that Axel is asking :)
13:43:25 <kasei> a clear win for punting to the service description!
13:43:39 <bglimm> ... it seems we should allow for both cases
13:44:15 <bglimm> ... one problem that could arise in the spec is that the update behaviour differs for the two cases
13:44:28 <bglimm> IvanH: I am worried about allowing both options
13:44:41 <dcharbon2> we could define select ?g where { graph ?g {} } to be empty... but then we need to figure out how to clean up bookkeeping systems... I'm leaning toward defining this to be empty
13:44:43 <bglimm> ... interoperability is limited then
13:44:49 <LeeF> I see a problem for "no bookkeeping" implementations with this sequence of statements: CREATE <g1> ; ASK FROM NAMED <g1> { GRAPH <g1> { } } -- a no bookkeeping impl would return FALSE, right?
13:44:55 <AxelPolleres> Did I get this right? on query { GRAPH <g> {}} where <g> is "unknown": andy : false; steve: false; paul : error; lee: error; david: false; matt: don't deal with named graphs yet, but prefer false ; greg: both
13:45:04 <LeeF> i don't see how we can include a CREATE statement and still have the spec bless this returning FALSE
13:45:22 <bglimm> IvanH: Some systems give error, while others give an empty answer. That's not good
13:45:41 <dcharbon2> q+
13:46:01 <AxelPolleres> q?
13:46:28 <AndyS> axel - right for me - I also have an impl that can return true (empty graphs exist)
13:46:36 <SteveH> What about: SELECT COUNT(*) WHERE { GRAPH <g> { ?x ?y ?z } } ?
13:47:05 <LeeF> AxelPolleres, I'm sorry, I don't know if that's right for me - I need more context of the full query (incl. RDF dataset)
13:47:10 <bglimm> dcharbon2: A select for empty graphs could either return an infinite list or an empty answer
13:48:03 <AxelPolleres> That one is NOT asking for empty graphs {GRAPH ?G {} }
13:48:44 <AxelPolleres> SELECT * {GRAPH ?G {} }
13:48:51 <dcharbon2> so, really... got the query wrong - query for all empty should return empty, otherwise it's an infinite list
13:49:05 <AxelPolleres> ASK {GRAPH <g> {} }
13:50:11 <bglimm> AndyS: This shows the difference between a dataset and a graph store
13:50:22 <LeeF> CREATE <g> ; ASK FROM NAMED <g> { GRAPH <g> { } }
13:51:02 <Souri> Souri has joined #sparql
13:51:10 <bglimm> LeeF: Should be true, but requires bookkeeping
13:51:58 <bglimm> SteveH: Create affects the graph store and asks is for the datasets
13:52:22 <AxelPolleres> CREATE <g> ; ASK { GRAPH <g> { } }
13:53:17 <bglimm> LeeF: Here the query does not define a dataset, so the implementation decides on the default dataset
13:54:16 <bglimm> LeeF: What would the update spec say for this?
13:54:49 <bglimm> SteveH: Graph stores and datasets are different things
13:55:16 <kasei> q+
13:55:20 <dcharbon2> q-
13:55:57 <AxelPolleres> q?
13:56:02 <bglimm> LeeF: I think there needs to be some relationship between the graph store and the datasets
13:56:12 <LeeF>
13:56:57 <bglimm> kasei: section 8.2.2 of the spec means that you have to return true or an error
13:57:27 <bglimm> SteveH: I am happy to return an error from a quad store
13:57:46 <bglimm> Sandro: Would you also be happy to support bookkeeping?
13:58:18 <bglimm> SteveH: No, I removed that from my last store because it is so much extra effort that I don't want it. Too many indexes. 13:58:55 <SteveH> q+ 13:58:57 <LeeF> q+ to wonder if his update proposal makes the quad store people unhappy 13:59:00 <bglimm> AxelPolleres: It seems we have 2 issues 1) bookkeeping yes or no 2) difference between datasets and graph stores 13:59:04 <AxelPolleres> 2 issues floating around ... i) bookkeeping yes/no and ii) date�aset vs. graphstore 13:59:39 <bglimm> SteveH: I read section 8.2.2. and I can't see it says anything about the { } pattern 13:59:59 <bglimm> AndyS: There is an explicit algebra operator that deals with that section 5.2.1 14:00:01 <AxelPolleres> q? 14:00:09 <SteveH> Zakim, ack me 14:00:09 <Zakim> I see kasei, LeeF on the speaker queue 14:00:13 <AndyS> 14:00:30 <bglimm> LeeF: Maybe we can just agree on specific details of the update language. 14:00:37 <LeeF> 14:00:46 <AxelPolleres> ... it seems we need to make a decision on ii) 14:01:06 <bglimm> ... I wonder whether the stuff on the wiki page does not satisfy anyones requirements 14:01:28 <AndyS> 3rd bullet is troublesome to me 14:01:51 <SteveH> q+ 14:01:52 <AndyS> "A SPARQL Update operation is performed in the context of a GraphStore" 14:02:08 <AndyS> (and haven't properly read the email yet - sorry) 14:02:09 <kasei> q- 14:02:15 <LeeF> ack LeeF 14:02:15 <Zakim> LeeF, you wanted to wonder if his update proposal makes the quad store people unhappy 14:02:30 <bglimm> AxelPolleres: Lets wrap up because the break is coming up. 14:02:50 <bglimm> LeeF: The wiki page mainly reflects my view. 14:03:42 <AndyS> q+ 14:04:04 <bglimm> ... I can't see that the ASK query should be allowed to return false by the spec 14:04:27 <sandro> sandro: we could say "empty graphs MAY be implicitely deleted". 
14:04:45 <SteveH> +1 to sandro's suggestion 14:04:53 <LeeF> +1 to sandro's suggestion 14:04:59 <Souri> +1 14:05:00 <Souri> +1 14:05:09 <MattPerry> +1 14:05:23 <sandro> -0 14:05:27 <sandro> :-) 14:05:44 <pgearon> +1 14:06:00 <AxelPolleres> can someone else take over scribing on MIT side? 14:06:01 <dcharbon2> +1 14:06:27 <sandro> lee: not the best solution for interoperability, but maybe the best we can do. 14:06:28 <AndyS> For clarification: when? during update or immediately after whole request done (multiple oerations)? 14:25:14 <Souri> Souri has joined #sparql 14:39:28 <LeeF> scribenick: pgearon 14:39:35 <AxelPolleres> scribe: paul gearon 14:39:48 <AxelPolleres> discussing 14:39:59 <AxelPolleres> ... asa a proposal to move forward. 14:40:23 <pgearon> LeeF: would like to have consensus on the wiki page and work this into the spec 14:40:25 <AxelPolleres> q? 14:40:32 <SteveH> q- 14:40:43 <LeeF> zakim, clear the q! 14:40:43 <Zakim> I don't understand 'clear the q!', LeeF 14:40:47 <AndyS> q+ 14:40:58 <LeeF> ack AndyS 14:41:13 <pgearon> AndyS: like the proposal on the wiki, but nervous about point 3 14:41:35 <pgearon> prop 3 links dataset and graph store 14:41:57 <pgearon> LeeF: removed this point 14:43:14 <AxelPolleres> 14:43:29 <LeeF> PROPOSED: Close ISSUE-20 via adoption in SPARQL Update 14:43:38 <kasei> +1 14:43:41 <bglimm> +1 14:43:41 <dcharbon2> +1 14:43:46 <MattPerry> +1 14:43:49 <Souri> +1 14:43:53 <AndyS> +1 14:43:53 <AxelPolleres> +1 14:44:04 <sandro> +0 (I guess it's good enough, but I wish I understand the semweb implications better) 14:44:25 <pgearon> +1 14:45:25 <pgearon> SteveH: points out that different order of loading data can lead to different results. 
Not sure if this is a significant enough use case to be a problem for us 14:45:38 <ivan> +1 14:45:52 <AlexPassant> +1 14:45:52 <AxelPolleres> q+ to add sd: extension proposal 14:46:03 <LeeF> RESOLVED: Close ISSUE-20 via adoption in SPARQL Update, no objections, Sandro abstaining 14:46:42 <trackbot> ISSUE-20 Graphs aware stores vs. quad stores for SPARQL/update (empty graphs) notes added 14:46:46 <pgearon> AxelPolleres: wants to have an sd flag to indicate if implementations always preserve empty graphs 14:46:48 <LeeF> trackbot, close ISSUE-20 14:46:48 <trackbot> ISSUE-20 Graphs aware stores vs. quad stores for SPARQL/update (empty graphs) closed 14:48:11 <LeeF> q? 14:48:14 <LeeF> ack AxelPolleres 14:48:14 <Zakim> AxelPolleres, you wanted to add sd: extension proposal 14:48:33 <pgearon> not going to add sd: at the moment 14:49:23 <kasei> q+ 14:50:04 <pgearon> Ivan: concerned about SILENT on CREATE, but spec is already clear 14:50:41 <LeeF> ack kasei 14:51:05 <pgearon> kasei: SILENT on DROP is different in that different implementations may or may not have errors on a DROP without SILENT 14:51:32 <pgearon> AlexPassant: this is similar to CREATE, and depends on the implementation. SILENT will always cover the user 14:51:45 <AxelPolleres> greg: CREATE/DROP is not necessarily a noop on graph stores that drop empty graphs immediately, but could be an error note� should be made in the spec. 14:52:56 <sandro> paul: It's an ERROR in all implementations if you CREATE when the graph isn't empty. 14:53:40 <LeeF> I'm ambivalent about CREATE GRAPH vs. ADD GRAPH 14:53:41 <pgearon> reponse to Souri: are we covered for calling CREATE on graphs which already exist and have data 14:54:20 <AndyS> ADD is as bad - may presume GET deref 14:54:52 <AxelPolleres> INSERT GRAPH ? 14:55:35 <sandro> CREATE isn't perfect, but ADD is worse. OBTAIN? PREPARE? BEGIN? OPEN? 
14:55:48 <pgearon> Ivan: concerned that CREATE is used when we are simple referring to a graph that already exists (even if it's just conceptually) 14:55:50 <sandro> STORE? REMEMBER? HOLD? 14:56:37 <pgearon> USING? 14:57:38 <AxelPolleres> INSERT GRAPH ; INSERT { } ; DELETE { } ; DELETE GRAPH 14:57:51 <SteveH> INSERT GRAPH <g>, no? 14:58:14 <AndyS> INSERT { GRAPH ..} WHERE ... is too close 14:59:24 <AxelPolleres> q? 14:59:34 <ivan> q? 14:59:41 <sandro> q+ 15:00:20 <AxelPolleres> 15:00:48 <pgearon> AxelPolleres: do we need certain syntactic constructs in the language, or do we want to change them (INSERT/CREATE and DELETE/DROP) 15:00:53 <AndyS> Origin of this: Difference between CREATE / DROP are graph mgt ; INSERT / DELETE are data manipulation 15:01:08 <AxelPolleres> q? 15:01:54 <pgearon> Sandro: what is the use case for creating a graph in the store? 15:03:17 <pgearon> LeeF: application developers often create graphs knowing that triples will not be available for it for some time (eg a week) 15:03:26 <pgearon> Sandro: so CREATE sounds like a reasonable name 15:03:36 <pgearon> Ivan: still unhappy with CREATE 15:03:47 <AndyS> q+ 15:04:09 <pgearon> Ivan: since the concept of the "graph" continues to exist whether it is in the local store or not 15:04:15 <AlexPassant> q+ 15:04:21 <AxelPolleres> q? 15:04:39 <AxelPolleres> ack sandro 15:05:18 <AxelPolleres> ack AndyS 15:05:24 <sandro> sandro: read CREATE as "create storage for" 15:06:27 <AxelPolleres> q? 
15:06:28 <pgearon> AndyS: LeeF is emphasizing the local storage of the data, the other approach is more about the entire web 15:06:42 <AxelPolleres> ack AlexPassant 15:07:23 <AxelPolleres> INSERT GRAPH ; INSERT { } ; DELETE { } ; DELETE GRAPH 15:07:30 <betehess> betehess has joined #sparql 15:07:58 <AndyS> qck me 15:08:01 <AndyS> ack me 15:08:08 <sandro> guest: Alexandre Bertails, W3C 15:08:16 <pgearon> AlexPassant: the CREATE operation is about referring to the graph as being expressed locally, as opposed to being elsewhere on the web 15:08:56 <Souri> Souri has joined #sparql 15:09:23 <pgearon> Axel: do we still want CLEAR? 15:09:28 <AlexPassant> pgearon: not only CREATE actually but any SPARUL operations 15:09:47 <kasei> q+ 15:11:06 <pgearon> AndyS: CREATE/DROP explicitly for book keeping operations. DELETE/INSERT are for data 15:11:19 <LeeF> ack kasei 15:11:53 <AxelPolleres> OPTION 1: keep with CREATE/DROP/INSERT/DELETE 15:12:14 <AxelPolleres> OPTION 2: INSERT/DELETE GRAPH INSERT/DELETE pattern 15:12:31 <AxelPolleres> OPTION 3: INSERT DROP GRAPH INSERT/DELETE pattern 15:12:36 <dcharbon2> q+ 15:12:48 <LeeF> ack dcharbon2 15:13:42 <pgearon> dcharbon2: if CREATE scoped, then across a federation this makes more sense, since you're creating on an individual server 15:13:47 <LeeF> +1 to dcharbon2 scope-oriented view 15:13:58 <sandro> dcharbon2: Create is always within some scope. A federated end-point might do the create in the right place. Otherwise, it's creating a local version. 15:14:04 <sandro> +1 dcharbon2 15:14:04 <AndyS> +1 - ultimate is scope = web 15:14:45 <SteveH> LOAD <> INTO <g> :) 15:14:52 <LeeF> :D 15:15:00 <Souri> Souri has joined #sparql 15:15:03 <pgearon> Ivan: if you're considering a foreign graph, then your local assertion of a graph is referring to the foreign graph and bringing it in locally. 
In this case the graph is not being "created" 15:15:20 <pgearon> LeeF: It is creation, just in a scope 15:15:21 <AxelPolleres> q+ to (seriously) ask ivan whether that is not LOAD 15:15:47 <pgearon> pgearon has left #sparql 15:15:49 <ivan> ack dcharbon2 15:15:57 <LeeF> ack dcharbon 15:15:58 <AndyS> ack dcharbon 15:15:58 <LeeF> ack AxelPolleres 15:15:59 <Zakim> AxelPolleres, you wanted to (seriously) ask ivan whether that is not LOAD 15:16:09 <pgearon> pgearon has joined #sparql 15:16:43 <sandro> q+ 15:17:05 <sandro> q+ to suggest endpoints SHOULD reject creates for which a LOAD is possible and they don't have write access 15:17:54 <pgearon> Axel: could use LOAD to handle Ivan's usecase. LOAD could create an error if the data does not exist 15:17:57 <Souri> q+ 15:18:42 <AndyS1> AndyS1 has joined #sparql 15:18:50 <LeeF> ack sandro 15:18:50 <Zakim> sandro, you wanted to suggest endpoints SHOULD reject creates for which a LOAD is possible and they don't have write access 15:19:51 <LeeF> q+ to say I don't see any way that the Update spec. can't embrace local copies that are likely divergent from authoritative copies 15:20:19 <AxelPolleres> ack Souri 15:20:23 <SteveH> q+ 15:20:30 <Souri> q+ 15:20:36 <ivan> ack LeeF 15:20:36 <Zakim> LeeF, you wanted to say I don't see any way that the Update spec. can't embrace local copies that are likely divergent from authoritative copies 15:20:51 <SteveH> q- 15:20:53 <AxelPolleres> sorry, souri should go first. 15:21:19 <sandro> sandro: Let's advise people to only CREATE for URIs for which they are authoritative. 15:21:21 <pgearon> +q to ask if we need to revisit the need to create/delete graphs 15:21:36 <sandro> pgearon, you mean q+ 15:21:42 <AxelPolleres> q+ to suggest that text for CREATE makes clear that CREATE creates a local copy? 15:21:49 <AndyS> AndyS has joined #sparql 15:22:24 <AlexPassant> cannot we say in the spec that the scope of SPARUL is LOCAL by default ? i.e. 
all operations apply on local copies of the graphs 15:22:59 <pgearon> Souri: CREATE is creating a "fragment" of the data in a local store 15:24:40 <LeeF> ack Souri 15:24:47 <AndyS> +1 to scope view - too much like web-in-a-box if scope only state of the web 15:25:12 <LeeF> ack pgearon 15:25:12 <Zakim> pgearon, you wanted to ask if we need to revisit the need to create/delete graphs 15:26:06 <SteveH> What about INSERT { GRAPH <g> {} } 15:26:07 <SteveH> ? 15:26:46 <LeeF> ack AxelPolleres 15:26:46 <Zakim> AxelPolleres, you wanted to suggest that text for CREATE makes clear that CREATE creates a local copy? 15:26:49 <dcharbon2> What does DELETE { GRAPH <g> } do? 15:27:12 <Souri> I am fine with option 1 (for now, implicitly meaning local scope) 15:27:18 <LeeF> dcharbon2, nothing as far as I can tell - the template doesn't "generate" any triples, so there's nothing to remove from anywhere 15:28:11 <kasei> CREAT GRAPH? 15:28:30 <AndyS> pgearon: OPTION 4: No CREATE, DROP 15:28:38 <AxelPolleres> OPTION 4: drop create/drop altogether? 15:29:03 <SteveH> I'm mildly in favour of not having CREATE 15:29:26 <AndyS> dcharbon: could design for bookkeepingless systems 15:30:03 <dcharbon2> +0.5 to option 4 15:30:15 <AxelPolleres> OPTION 1: keep with CREATE/DROP/INSERT/DELETE 15:30:16 <AxelPolleres> OPTION 2: INSERT/DELETE GRAPH INSERT/DELETE pattern 15:30:16 <AxelPolleres> OPTION 3: INSERT DROP GRAPH INSERT/DELETE pattern 15:30:16 <AxelPolleres> OPTION 4: Drop CREATE/DROP altogether 15:30:31 <ivan> OPTION 5: Add/DROP/INSERT/DELETE 15:30:51 <AndyS> ADD implies LOAD to me 15:30:57 <SteveH> me too 15:31:06 <sandro> is ADD really ADD GRAPH ? 15:31:20 <sandro> ivan: yes. 
15:33:03 <AndyS> +1 / -1 / -1 / maybe, need more time to consider / -1 15:33:17 <LeeF> +1 / -1 / -1 / 0 15:33:32 <ivan> -1/-1/1/maybe/1 15:33:39 <pgearon> +1 / -1 / -1 / +0.5 / -1 15:33:47 <kasei> 0.5 / 0 / 0 / 0 / 0 15:33:48 <LeeF> +1 / -1 / -1 / 0 / 0 15:33:49 <MattPerry> 0 / -1 / -1 / +1 / -1 15:33:50 <AlexPassant> 0 / 0 / 0 / +1 / 0 15:33:53 <SteveH> +0 / -1 / -1 / +0.5 / -1 15:33:55 <Souri> 0.5/-1/-1/+1/-1 15:34:01 <AxelPolleres> +1/0/0/0/+1 15:34:02 <sandro> +0 / -1 / -1 / -0 / +0 15:34:11 <bglimm> +1 / 0 / -1 / 0 / 0 15:34:57 <dcharbon2> 0/0/0/0 15:35:07 <dcharbon2> 0 15:35:29 <AndyS> 0x01FFFF--FF 15:36:16 <kasei> q+ 15:37:30 <LeeF> option 1 15:37:36 <kasei> 0 15:37:36 <AndyS> option 1 15:37:40 <SteveH> option 1 15:38:01 <MattPerry> option 1 15:38:05 <bglimm> 1 15:38:06 <AxelPolleres> STRAWPOLL: 1 (prefer Option 1) / 5 (prefer Option 5) / 0 (don't care between those two) 15:38:07 <AxelPolleres> 1 15:38:13 <dcharbon2> option 1 15:38:21 <AlexPassant> 0 (but need to mention the scope in the spec. for both imo) 15:38:26 <sandro> 1 is "CREATE GRAPH", 5 is "ADD GRAPH" 15:38:47 <sandro> option 5 15:38:56 <AndyS> Option 4: DROP -> CLEAR 15:38:58 <SteveH> CLEAR <g> == DROP GRAPH <g> 15:39:04 <LeeF> DELETE WHERE { GRAPH <g1> { ?s ?p ?o } } 15:39:39 <Souri> 1 15:39:51 <LeeF> AxelPolleres: Clear preference for Option 1 over Option 5 15:40:22 <Souri> Souri has joined #sparql 15:40:36 <AxelPolleres> PROPOSED: mark CREATE/DROP at risk 15:41:16 <pgearon> +1 15:41:23 <AxelPolleres> not now...? 15:41:52 <AxelPolleres> paul: already in the spec. 15:42:06 <LeeF> ACTION: Lee and AxelPolleres to solicit feedback from community regarding CREATE/DROP upon publication of next Update working draft 15:42:07 <trackbot> Could not create new action (failed to parse response from server) - please contact sysreq with the details of what happened. 
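To make the strawpoll outcome above concrete: Option 1 keeps graph management (CREATE/DROP) distinct from data manipulation (INSERT/DELETE), as AndyS notes at 15:00:53. A minimal sketch of that split in the draft syntax under discussion, with hypothetical graph and resource names (the semicolon separators follow the convention agreed later in the session):

```sparql
# Graph management: bookkeeping operations on the graph store
CREATE GRAPH <http://example.org/g1> ;

# Data manipulation: adding and removing triples
INSERT DATA { GRAPH <http://example.org/g1>
              { <http://example.org/s> <http://example.org/p> <http://example.org/o> } } ;
DELETE DATA { GRAPH <http://example.org/g1>
              { <http://example.org/s> <http://example.org/p> <http://example.org/o> } } ;

# Graph management again: remove the graph and its bookkeeping
DROP GRAPH <http://example.org/g1>
```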
15:42:07 <trackbot> Could not create new action (unparseable data in server response: local variable 'd' referenced before assignment) - please contact sysreq with the details of what happened. 15:42:22 <AxelPolleres> subtopic: Keep CLEAR? 15:42:49 <pgearon> +q 15:42:58 <LeeF> ack kasei 15:42:58 <kasei> q- 15:43:04 <SteveH> we're now talking about 15:43:07 <LeeF> ACTION: Lee and AxelPolleres to solicit feedback from community regarding CREATE/DROP upon publication of next Update working draft 15:43:07 <trackbot> Could not create new action (failed to parse response from server) - please contact sysreq with the details of what happened. 15:43:07 <trackbot> Could not create new action (unparseable data in server response: local variable 'd' referenced before assignment) - please contact sysreq with the details of what happened. 15:43:34 <SteveH> q? 15:43:39 <Souri> q+ 15:43:48 <ivan> ack pgearon 15:43:49 <SteveH> q+ 15:44:05 <LeeF> ack Souri 15:44:39 <LeeF> is CLEAR <g1> identical to DELETE WHERE { GRAPH <g1> { ?s ?p ?o } } ? 15:44:50 <SteveH> LeeF, yes 15:45:03 <AxelPolleres> q? 15:45:08 <LeeF> thanks. I don't care whether or not we keep CLEAR :) 15:45:46 <LeeF> q? 15:45:52 <LeeF> ack SteveH 15:46:32 <pgearon> SteveH: no longer thinks that DELETE <graph> is a good idea, since it reads like the removal of the graph rather than the contents 15:47:36 <Souri> CLEAR graph <g1> graph <g2> ? 15:47:40 <pgearon> Souri: can CLEAR be applied to multiple graphs? 15:48:01 <LeeF> LeeF: Consensus in group to keep CLEAR syntax as is. 15:48:26 <pgearon> pgearon: was this like the recent email which referred to multiple graphs (in a LOAD command) 15:48:41 <SteveH> There's also the question of the CLEAR [default graph] syntax: 15:48:43 <LeeF> subtopic: delimiters / separators 15:48:45 <LeeF> see 15:48:50 <pgearon> AxelPolleres: we will keep CLEAR (for the moment) 15:49:01 <AxelPolleres> 15:49:01 <AndyS> I put optional separators in grammar to check there is no clash problems. 
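LeeF's question at 15:44:39 and SteveH's answer pin down the semantics being kept: both forms below empty a graph's contents without dropping the graph itself from the store. A sketch in the draft syntax under discussion, with a hypothetical graph name:

```sparql
# Remove every triple in <g1>; the (now empty) graph stays in the store
CLEAR <http://example.org/g1>

# The equivalent pattern-based form: match all triples in <g1> and delete them
DELETE WHERE { GRAPH <http://example.org/g1> { ?s ?p ?o } }
```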
15:49:06 <sandro> sandro: Why no "DELETE FROM" as in SQL? Answer from Lee: because in SPARQL, "FROM" means stuff..... 15:49:18 <AndyS> This is not an opinion. 15:50:14 <pgearon> SteveH: easy to mistype a query such that it is ambiguous for a human to determine what it means 15:50:25 <AndyS> Nice to have when one line, several operations -- nuisance multi line 15:50:30 <LeeF> I'm quite happy to have operations in SPARQL Update have a terminating character 15:51:02 <pgearon> SteveH: would like compulsory separators, but violently opposed to making them optional 15:51:13 <pgearon> +q 15:51:50 <SteveH> CLEAR DELETE DATA { <x> <y> <z> } 15:52:55 <pgearon> -q 15:53:05 <AndyS> Now have "CLEAR DEFAULT" , not just CLEAR 15:53:43 <AndyS> q+ 15:53:46 <AndyS> 15:54:42 <Souri> Souri has joined #sparql 15:54:52 <SteveH> CLEAR DEFAULT DELETE DATA { <x> <y> <z> } - but draft combined grammar 15:54:56 <SteveH> s/but/by/ 15:55:06 <AxelPolleres> DFT in the grammar should be 'DEFAULT' 15:56:14 <SteveH> CLEAR 15:56:16 <SteveH> DEFAULT 15:56:21 <SteveH> ELETE DATA { <x> <y> <z> } 15:59:00 <pgearon> +q to ask SteveH why he's so opposed to an optional separator 15:59:06 <AndyS> ack me 15:59:14 <AndyS> this is a command line issue? 15:59:32 <pgearon> Ivan: doesn't see why CLEAR DEFAULT DELETE DATA ... is ambiguous 15:59:51 <AndyS> Are we saying cmd line is exactly same as HTTP request body? May be different. 
15:59:55 <AxelPolleres> OPTION 1: ";" Obligatory 15:59:55 <AxelPolleres> OPTION 2: no separators 15:59:55 <AxelPolleres> OPTION 3: ";" optional (steve objects) 16:00:28 <AndyS> OPT 3 does not work on command line BTW 16:00:37 <Souri> option 1 16:01:07 <AndyS> q+ 16:01:11 <LeeF> ack pgearon 16:01:11 <Zakim> pgearon, you wanted to ask SteveH why he's so opposed to an optional separator 16:01:13 <LeeF> ack AndyS 16:02:17 <AndyS> separator has problems as well 16:02:22 <pgearon> SteveH: optional separators makes it much harder for people to learn, since if they only ever see the separator, then they will get confused when it is not used 16:02:50 <AndyS> here an example: See CLEAR DEFAULT -- now unclear (cmd line example) 16:03:36 <pgearon> sandro: would like to see a separator and not a terminator (to avoid "separating" a null command) 16:03:40 <Souri> Souri has joined #sparql 16:04:00 <sandro> sandro: lets do semicolon terminator like in SQL -- may be necessary as terminator on command line, but optional as terminator in API. 16:04:37 <kasei> option 1 16:04:44 <LeeF> 0 16:04:46 <SteveH> +1 / -1 / -Inf 16:04:53 <MattPerry> option 1 16:04:57 <SteveH> option 1 16:04:57 <AlexPassant> 1 16:04:59 <bglimm> 1 16:04:59 <AndyS> opt 2 16:05:04 <dcharbon2> option 2 16:05:04 <AxelPolleres> Option 1 16:05:05 <sandro> option 1 16:05:06 <Souri> option 1 (';' as terminator) 16:05:12 <ivan> option 3 (sorry steve) 16:05:16 <AndyS> (can live with opt 1 as sep) 16:05:18 <pgearon> Option 2 16:06:08 <Souri> Souri has joined #sparql 16:07:13 <LeeF> are we talking about a command line or about an interactive query shell type thing? 16:07:43 <Souri> Souri has joined #sparql 16:08:34 <Souri> q+ 16:08:38 <AndyS> Let us not spec cmd line usage 16:08:38 <AxelPolleres> Option 1: 7 Option 2: 3 Option 3: 1 16:09:51 <Souri> in SQL-92, ';' is a terminator 16:10:41 <AxelPolleres> ";" as separator seems to be agreeable to everyone. 16:10:50 <AxelPolleres> terminator? 
16:11:00 <LeeF> ack Souri 16:11:03 <sandro> PROPOSED: semicolons are a required separator. (we're not talking about whether they are allowed as terminators as well.) 16:11:22 <AndyS> q+ 16:11:47 <sandro> lee: why no ";" as terminator? because people want to be able to leave off a trailing ";". 16:12:01 <LeeF> ack AndyS 16:14:20 <Souri> I prefer terminator, but may go with separator 16:14:24 <AndyS> Need (';' | <EOF>) to terminate 16:14:35 <SteveH> +1 to AndyS 16:14:37 <Souri> +1 16:15:31 <AxelPolleres> why? 16:15:32 <AxelPolleres> update := atomicupdate | update ';' atomicupdate 16:15:52 <AndyS> That fixes cmd line (which I don't care if diff systems do diff things anyway) 16:16:04 <sandro> update := atomicupdate | update ';' atomicupdate | 16:17:01 <AndyS> "that" refers to (';' | <EOF>) 16:17:40 <kasei> why not sandro's? 16:17:54 <sandro> PROPOSED: semicolons are a required separator, and either ";" or <EOF> terminates 16:17:56 <pgearon> SteveH: likes Andy's suggestion since it handles having a ; at the end or not, for both protocol and a command line 16:18:46 <sandro> PROPOSED: semicolons are a required separator, and either ";" or <EOF> terminates (and empty-string is an acceptable command) 16:18:59 <AxelPolleres> q? 16:19:02 <sandro> +1 16:19:05 <ivan> +1 16:19:07 <AxelPolleres> +1 16:19:09 <dcharbon2> +1 16:19:10 <MattPerry> +1 16:19:11 <SteveH> +1 16:19:12 <Souri> +1 16:19:13 <pgearon> +1 16:19:24 <AndyS> +0.5 (need to check it works!) 
16:19:37 <LeeF> 's fine with me 16:19:46 <sandro> RESOLVED: semicolons are a required separator for update operations, and either ";" or <EOF> terminates (and empty-string is an acceptable command) 16:19:47 <kasei> +1 16:21:42 <LeeF> subtopic: datasets and update 16:21:43 <LeeF> 16:25:51 <AxelPolleres> the fundamental thing here seems the base assumption: "I'd _like_ to be able to say that the RDF Dataset is a subset of the Graph Store, but given that the Graph Store defines a single unnamed 16:25:51 <AxelPolleres> graph whereas the RDF Dataset allows me to craft a default graph as the merge of multiple graphs, I don't know how to formally specify this subset relationship." 16:26:03 <AndyS> Minor : reads as INSERT ... FROM which is, to me, odd. Maybe can live with. 16:27:15 <AndyS> Think single FROM clause rule is way too clever. 16:27:34 <pgearon> SteveH: observes that it's strange that INSERT/DELETE would be affected by a "FROM" 16:28:21 <pgearon> WITH g1 INSERT { x y z } DELETE { a b c } WHERE { ... } 16:28:24 <sandro> SteveH: How about WITH is for the INSERT/DELETE and FROM is for the WHERE. 16:28:54 <AxelPolleres> q+ 16:29:18 <LeeF> SteveH: What about moving FROM to the top? 16:29:27 <AndyS> Prefer FROM only applies to WHERE. 16:29:30 <LeeF> LeeF: Not a strong feeling; diverges from what Query does 16:29:38 <pgearon> AxelPolleres: thinks there's an alternative to the base assumption 16:29:53 <pgearon> ... presumes this would only apply to the WHERE part 16:30:41 <AndyS> then prefer no WITH at all or WITH applies only to INSERT, DELETE (based on separation of concepts) 16:30:56 <pgearon> AxelPolleres: gets back to conversation at the beginning, re: graphstore-vs-dataset 16:31:22 <LeeF> AndyS, I would be OK with FROM/FROM NAMED applying only to WHERE and no WITH at all, but still think having 2 distinct concepts of a default graph for the operation is confusing 16:31:53 <AndyS> LeeF - good point. 
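A sketch of a request under the separator resolution recorded above: the ";" between operations is required, a trailing ";" is harmless because either ";" or end-of-input terminates, and an empty request is legal. Operations and names here are hypothetical:

```sparql
INSERT DATA { <http://example.org/s> <http://example.org/p> <http://example.org/o1> } ;
DELETE DATA { <http://example.org/s> <http://example.org/p> <http://example.org/o2> } ;
# the trailing ";" above is allowed: ";" or <EOF> terminates the final operation
```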
16:44:41 <AndyS> AndyS has joined #sparql 17:11:14 <LeeF> AndyS, can you begin scribing following this break? 17:21:31 <Souri> Souri has joined #sparql 17:29:52 <Mattperry> Mattperry has joined #sparql 17:29:58 <dcharbon2> dcharbon2 has joined #sparql 17:38:49 <ericP> ericP has joined #sparql 17:39:19 <bglimm> bglimm has joined #sparql 17:39:52 <LeeF> Chair: LeeF 17:40:25 <AndyS1> AndyS1 has joined #sparql 17:40:33 <AxelPolleres> AxelPolleres has joined #sparql 17:40:38 <AndyS1> For ericP: 17:40:58 <SteveH> SteveH has joined #sparql 17:41:05 <ericP> tx AndyS1 17:41:18 <LeeF> scribenick: AndyS 17:41:32 <AndyS1> scribenick: AndyS1 17:41:37 <AndyS1> ... for some reason 17:41:44 <Souri> Souri has joined #sparql 17:41:48 <AndyS1> LeeF: update continues 17:42:02 <AxelPolleres> an additional thing people brought up was the LOAD <uri1> LOAD <uri2> vs LOAD <uri1> <uri2> issue (which might imply other shortcuts like that as well)... in case we get there 17:42:08 <AndyS1> report from Boston lunch discussions 17:42:24 <AndyS1> pgearon: multiple load issue 17:43:22 <AndyS1> keyword WITH - introducing FROM could be alongside WITH - layered approach 17:43:36 <AndyS1> ... explicit GRAP => use named graph 17:43:50 <AndyS1> ... else FROM / FROM NAMED 17:43:55 <AndyS1> ... else WITH 17:44:01 <AndyS1> ... 
else default 17:44:09 <AndyS1> s/GRAP/GRAPH/ 17:44:23 <AndyS1> Lee, others: confusing 17:44:48 <AxelPolleres> q+ 17:45:02 <LeeF> ack AxelPolleres 17:45:17 <ivan> ack AxelPolleres 17:45:53 <AndyS1> AxelPolleres: alternative FROM only for WHERE not INSERT DELETE 17:46:13 <AndyS1> q+ 17:46:33 <AndyS1> LeeF: confusing even though no lex order (Steve point) 17:46:36 <LeeF> ack AndyS 17:47:11 <LeeF> q+ AndyS 17:47:15 <LeeF> ack AndyS 17:47:49 <LeeF> AndyS: I find it slightly odd to use FROM to constrain the INSERT/DELETE template because of the case of multiple FROMs 17:48:17 <AndyS1> LeeF: compromise design 17:48:49 <AndyS1> So do we need to change scope for default graph in update part? 17:49:05 <AxelPolleres> What I want to say is that the decoupling dataset and graphstore is confusing anyways from one point of view. 17:50:36 <LeeF> q? 17:52:16 <LeeF> AndyS: What if unadorned triple patterns in INSERT/DELETE templates went against the Graph Store's default graph, and FROM only controlled the default graph for the query pattern? 17:52:27 <LeeF> LeeF: Think it's still a little weird because there are distinct default graph concepts, but I can live with it 17:52:33 <AndyS1> AndyS: error case of union FROM is confusing to me 17:53:14 <AndyS1> AndyS: do see value of setting dft graph for WHERE part 17:54:03 <pgearon> INSERT { GRAPH <g1> { x y z } } DELETE { GRAPH <g1> { a b c } } WHERE FROM <g1> { ... } 17:54:16 <pgearon> WITH <g1> INSERT { x y z } DELETE { a b c } WHERE { ... } 17:54:35 <AndyS1> Alt : either WITH or FROM not both 17:54:57 <AndyS1> pgearon: common use pattern 17:55:10 <AndyS1> ... operations on a single graph 17:55:13 <pgearon> INSERT { GRAPH <g1> { x y z } } DELETE { GRAPH <g1> { a b c } } WHERE FROM <g2> { ... } 17:55:49 <pgearon> WITH <g1> INSERT { x y z } DELETE { a b c } FROM <g2> WHERE { ... 
} 17:56:03 <LeeF> now what happens when i put this in the protocol with default-graph-uri=g3 17:56:46 <AxelPolleres> my understanding is that default-graph-uri= only overwrites FROM 17:57:09 <LeeF> what principle is that understanding based on, AxelPolleres? 17:57:22 <LeeF> IvanH: the above example reads confusing to me because of the DELETE { ... } FROM part 17:57:54 <AxelPolleres> leeF, same as in query, isn't it? 17:58:00 <LeeF> query doesn't have with 17:58:14 <LeeF> i don't see any principle that would make me think that default-graph-uri doesn't also override WITH 17:58:17 <LeeF> or override WITH instead 17:58:20 <AndyS1> pgearon: value DRY 17:58:30 <AndyS1> (Don't Repeat Yourself) 17:58:57 <AxelPolleres> LeeF, but protocol has default-graph-uri= which corresponds to FROM, which I suggest to just keep 17:59:35 <AndyS1> LeeF: now tending INS DEL effect real dft graph. 18:01:23 <AxelPolleres> I think the proposal is useful, exactly because it allows to insert/modify based on querying graphs possibly external to the graphstore. 18:01:31 <AxelPolleres> q? 18:01:33 <AxelPolleres> q+ 18:01:41 <LeeF> ack AxelPolleres 18:01:43 <AndyS1> ack AxelPolleres 18:01:48 <Souri> q+ 18:02:03 <sandro> sandro: The confusion in DELETE { ... } FROM ... WHERE ... is pretty bad. 18:02:33 <AndyS1> LeeF: if no spec - graph store = impl decision c.f. query 18:02:56 <LeeF> ack Souri 18:03:02 <dcharbon2> could where come first? so, FROM... WHERE... DELETE { ... } 18:03:31 <SteveH> dcharbon2's suggesting is interesting 18:03:35 <SteveH> *suggestion 18:03:42 <AndyS1> Souri: WITH clause: common use case update is on a graph that is also the WHERE part 18:03:54 <pgearon> FROM... WHERE ... INSERT .... DELETE ... 18:04:22 <SteveH> DELETE ... INSERT ... probably 18:04:55 <dcharbon2> FROM... WHERE... WITH... INSERT... DELETE... 18:05:05 <AndyS1> If we change order, FROM => USING maybe 18:05:39 <AxelPolleres> the proposal on the comments list was more along the lines: SELECT ... FROM ... WHERE ... 
INSERT ... DELETE 18:05:53 <AndyS1> Souri: SQL is declare the thing to change then WHERE data 18:05:58 <AxelPolleres> ... i.e. allowing SELECT i.e. query parts together with updates 18:05:59 <LeeF> to me, CONSTRUCT foo and INSERT foo are very similar ideas, and having them be so different is pretty weird to me 18:06:01 <AndyS1> q+ 18:06:08 <LeeF> ack AndyS1 18:06:12 <LeeF> ack AndyS 18:06:25 <AxelPolleres> q+ 18:06:33 <AndyS1> ack me 18:06:48 <LeeF> ack AxelPolleres 18:07:02 <LeeF> Souri: SQL has a returning clause for returning results from a DML statement 18:08:31 <pgearon> I've recently been getting requests to return multiple result sets using an XML format that Mulgara had prior to SPARQL 18:08:58 <AndyS1> LeeF: general sentiment on whether order of keywords is like SQL? like query? 18:10:01 <AxelPolleres> obvious question is... if we allow SELECT ... FROM ... WHERE ... INSERT ... DELETE then why not CONSTRUCT ... FROM ... WHERE ... INSERT ... DELETE ? 18:10:34 <AndyS1> Ivan: given CONSTRUCT go that way 18:10:35 <pgearon> I've just checked, DELETE is always before INSERT (sorry about any confusion up to now) 18:11:01 <AndyS1> LeeF: what about souri's idea of WITH xor FROM ? 18:11:16 <AndyS1> ... WITH is shortcut for GRAPH everywhere 18:11:48 <AndyS1> SteveH: protocol interactions 18:11:59 <LeeF> q? 18:12:53 <AxelPolleres> interactions with protocol is e.g. if in the protocol someone uses e.g. default-graph-uri 18:13:22 <pgearon> q+ 18:13:27 <LeeF> ack pgearon 18:13:31 <AndyS1> AndyS: suggest default-graph + FROM in update => error 18:13:45 <AndyS1> ... unlike query 18:14:17 <AndyS1> ... override can change a two-different-graph update into a one-graph update 18:15:00 <SteveH> LeeF, we do 18:15:06 <SteveH> for security, only access one graph 18:15:12 <AndyS1> same here 18:15:17 <AndyS1> q+ 18:15:25 <LeeF> INSERT T1 DELETE T2 FROM G1 { ... 
} 18:15:34 <LeeF> T1 and T2 act on the Graph Store's default graph 18:15:46 <LeeF> and the query pattern queries G1 as its default graph 18:15:46 <SteveH> missing WHERE? 18:16:09 <LeeF> INSERT T1 DELETE T2 FROM G1 WHERE { ... } 18:16:43 <AndyS1> LeeF: hearing consensus around the above 18:18:00 <AndyS1> AndyS: default-graph-uri= is whole request = many operations, some with FROM, some with WITH, some with none. 18:18:54 <LeeF> * If you use default-graph-uri, the request can't use WITH or FROM at all 18:19:12 <LeeF> * If you don't use default-graph-uri, then each operation within the request can use WITH or FROM/FROM NAMED, but not both 18:19:28 <Souri> q+ 18:19:33 <LeeF> ack AndyS 18:19:35 <kasei> q+ 18:19:55 <LeeF> ack Souri 18:20:35 <dcharbon2> q+ to ask about just ... adding [with] [delete] [insert] at the end of a query - with support for select, construct and ask for returning values 18:20:51 <AxelPolleres> q+ to ask why interaction of WITH with FROM (NAMED) is a problem *at all* 18:21:26 <LeeF> ack kasei 18:22:04 <AxelPolleres> WITH <g> FROM <g1> INSERT P1 WHERE P2 = FROM <g1> INSERT {GRAPH <g> P1} WHERE {GRAPH <g> P2} 18:22:09 <LeeF> kasei: seems very strange to forbid default-graph-uri overriding FROM in the case of a single update operation 18:22:11 <AxelPolleres> ... so what? 18:22:17 <LeeF> ack dcharbon 18:22:17 <Zakim> dcharbon, you wanted to ask about just ... 
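The consensus LeeF sketches above (18:16:09) keeps two distinct default graphs: unadorned templates act on the Graph Store's default graph, while FROM sets the default graph only for the WHERE pattern, and each operation may use WITH or FROM/FROM NAMED but not both. A hedged illustration in the draft syntax, with hypothetical graph and predicate names (this role of FROM is roughly what the eventual SPARQL 1.1 Update spelled USING):

```sparql
# FROM scopes only the WHERE pattern; the templates (no GRAPH wrapper)
# act on the Graph Store's default graph
INSERT { ?s <http://example.org/q> ?o }
DELETE { ?s <http://example.org/p> ?o }
FROM <http://example.org/g1>
WHERE { ?s <http://example.org/p> ?o }

# WITH shortcut: one graph serves both the templates and the pattern
WITH <http://example.org/g2>
INSERT { ?s <http://example.org/q> ?o }
DELETE { ?s <http://example.org/p> ?o }
WHERE { ?s <http://example.org/p> ?o }
```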
adding [with] [delete] [insert] at the end of a query - with support for select, construct and ask for returning values 18:23:23 <AndyS1> dcharbon: query-ish thing followed by updates 18:23:42 <dcharbon2> [query] update-clause 18:24:28 <LeeF> dcharbon2: suggests-- [optional SPARQL result clause] [optional SPARQL dataset clause] SPARQL query pattern [optional SPARQL modifiers] [optional WITH] [optional DELETE template] [optional INSERT template] 18:24:49 <AxelPolleres> ack AxelPolleres 18:24:49 <Zakim> AxelPolleres, you wanted to ask why interaction of WITH with FROM (NAMED) is a problem *at all* 18:24:49 <LeeF> ack AxelPolleres 18:24:58 <AndyS1> q+ to ask for clarification 18:26:43 <LeeF> ack AndyS 18:26:43 <Zakim> AndyS, you wanted to ask for clarification 18:27:59 <Souri> q+ 18:29:04 <LeeF> AndyS: We haven't really talked about solution modifiers with update 18:29:16 <LeeF> AndyS: does this let us return results based on the post-update state of the store? 18:29:19 <LeeF> LeeF: No, not really 18:29:33 <AxelPolleres> so, one thing that puzzles me: if I want to ask a SELECT query to the endpoint's *graphstore* instead of the *default dataset*, I will just use the "update" protocol operation instead of the "query" operation? 18:29:36 <Souri> q- 18:29:37 <LeeF> IvanH: but you could chain multiple operations together with the subsequent one being the one to return something about the state of the graph store after update 18:29:51 <dcharbon2> AxelPolleres: good point 18:30:37 <dcharbon2> It's subversive... too subversive or abusive? 18:30:44 <AxelPolleres> q+ 18:30:50 <AndyS1> AndyS: issue with multiple SELECT in one request (mix CONSTRUCT and SELECT) 18:32:21 <LeeF> ack AxelPolleres 18:32:43 <AndyS1> AxelPolleres: mixes query on dataset and query on graph store 18:33:03 <AndyS> scribenick: AndyS 18:33:07 <kasei> q+ 18:33:13 <LeeF> ack kasei 18:33:16 <AndyS> ivan: assumed treated uniformly 18:33:51 <AndyS> kasei: why query on graph store? 
why not endpoint makes dataset as query 18:35:23 <AndyS> AxelPolleres: dft graph for Q can be different from dft from update ? 18:36:15 <AxelPolleres> we have both "default graph" and "unnamed graph" in Update. 18:37:30 <AndyS> LeeF: do we want to include results to update other than success/failure 18:37:51 <AndyS> ... or given where we are now and our lifecycle do we stick to current position 18:38:08 <AxelPolleres> I think, if we leave querying out of update, we spare the problem of unifying the dataset and graphstore. 18:39:14 <AndyS> Feeling is stick to current update with success/failure only (time concerns) 18:39:39 <LeeF> PROPOSED: SPARQL Update requests return either success or failure only 18:39:46 <ivan> +1 18:39:54 <bglimm> +1 18:39:58 <LeeF> RESOLVED: SPARQL Update requests return either success or failure only, no abstentions or objections 18:39:58 <sandro> +1 (given time constraints) 18:40:00 <AlexPassant> q+ 18:40:05 <Souri> +1 18:40:06 <LeeF> ack AlexPassant 18:41:53 <AxelPolleres> q+ to ask, didn't we at some point have a suggestion on the table that payload for protocol has number of added/deleted triples (or was that just in some implementation) 18:42:15 <LeeF> ack AxelPolleres 18:42:15 <Zakim> AxelPolleres, you wanted to ask, didn't we at some point have a suggestion on the table that payload for protocol has number of added/deleted triples (or was that just in some 18:42:16 <sandro> we're not ruling out having various error messages; we're ruling out requiring query results as part of an update operation. 18:42:19 <Zakim> ... implementation) 18:43:21 <AlexPassant> ARC2 does that as well re. 
number of added triples 18:43:39 <AxelPolleres> right, Alex, that's where I saw it 18:43:56 <AndyS> would be costly for one of my impls to test if triple existed every time 18:44:16 <ivan> ivan has changed the topic to: 18:44:32 <AndyS> LeeF: want subgroup to meet and make concrete proposal for WITH/FROM/default-graph-uri= 18:44:42 <AndyS> ... inc pgearon, LeeF 18:44:59 <AndyS> ... SteveH, AndyS 18:45:20 <AlexPassant> AxelPolleres: I'm using ARC2 in lots of apps but never used that feature (success / failure is generally enough) 18:45:26 <AxelPolleres> should we action someone to answer to along the lines of the Resolution? 18:45:32 <AndyS> (but don't want to slow things down) 18:45:42 <LeeF> ACTION: Lee to coordinate with Paul, Steve, and Andy to form a coherent proposal re: datasets, FROM, FROM NAMED, WITH, default-graph-uri, and named-graph-uri 18:45:43 <trackbot> Created ACTION-206 - Coordinate with Paul, Steve, and Andy to form a coherent proposal re: datasets, FROM, FROM NAMED, WITH, default-graph-uri, and named-graph-uri [on Lee Feigenbaum - due 2010-04-01]. 18:46:00 <LeeF> ACTION: AxelPolleres to respond to 18:46:00 <trackbot> Sorry, couldn't find user - AxelPolleres 18:46:05 <LeeF> ACTION: Axel to respond to 18:46:05 <trackbot> Created ACTION-207 - Respond to [on Axel Polleres - due 2010-04-01]. 18:47:34 <kasei> i can scribe 18:47:36 <AndyS> LeeF: more update via protocol 18:47:39 <kasei> scribenick: kasei 18:48:20 <kasei> LeeF: anybody run into issues with protocol? 18:48:28 <kasei> ivan: how does it handle semicolons? 18:48:42 <kasei> LeeF: just part of the http request body 18:48:53 <AxelPolleres> q+ 18:48:58 <betehess_> betehess_ has joined #sparql 18:48:59 <LeeF> ack AxelPolleres 18:49:23 <kasei> AxelPolleres: with implicit dropping, wouldn't see a response indicating that the drop happened. 
18:49:30 <kasei> dcharbon2: correct 18:49:59 <kasei> LeeF: we'll still have to define what happens if several operations succeed and some others fail 18:50:24 <kasei> pgearon: since only returning success/failure, probably just fail 18:50:46 <kasei> LeeF: gets into concurrency/atomicity issues 18:51:02 <LeeF> this is 18:51:16 <kasei> pgearon: not sure about atomicity. would like to recommend group of statements be done atomically where possible. 18:51:33 <kasei> ... for systems that don't support atomic operations, still should work. so atomicity should be defined as SHOULD. 18:51:34 <AndyS> +1 to SHOULD be atomic 18:51:41 <LeeF> +1 as well 18:51:58 <kasei> dcharbon2: if we want transactional, statements should be ordered based on syntax. 18:52:08 <kasei> pgearon: yes. will need to guarantee execution order. 18:52:18 <kasei> sandro: don't want to rule out eventually consistent systems. 18:52:28 <kasei> ... SHOULD may not be the right wording. 18:52:43 <kasei> pgearon: was expecting an indication in the service description indicating this. 18:53:15 <kasei> sandro: SHOULD in strict interpretation is do it unless there's a good reason not to. 18:53:23 <kasei> ... not sure being distributed is a good reason. 18:53:29 <AndyS> q+ 18:53:36 <LeeF> ack AndyS 18:53:58 <sandro> sandro: but, yeah, I guess being distributed is a good reason. 18:54:06 <kasei> AndyS: would like to say SHOULD. two cases. 1: person doing update sees consistent state of the world. 18:54:36 <kasei> ... 2: eventually consistent, but person doing update might not see consistency. 18:54:50 <kasei> pgearon: for those systems that are able to do it, needs to be service description support. 18:55:05 <kasei> sandro: ok, understanding distributed is a good reason not to do it. 18:55:36 <kasei> AndyS: should just use SHOULD, not over-interpret how people implement their systems. 18:56:02 <kasei> sandro: implementors might interpret that as shouldn't be doing distributed systems. 
18:56:20 <kasei> LeeF: hearing consensus. can we close ISSUE-26? 18:56:34 <sandro> sandro: I think we should include some text saying: FOR EXAMPLE, eventually-consistent stores are fine. 18:57:54 <LeeF> PROPOSED: Close ISSUE-26 by noting that update requests SHOUDL be processed atomically and that SPARQL update provides no other transactional mechanisms 18:58:07 <ivan> s/SHOUDL/SHOULD/ 18:58:11 <AxelPolleres> +1 18:58:15 <ivan> +1 18:58:17 <pgearon> +1 18:58:18 <LeeF> PROPOSED: Close ISSUE-26 by noting that update requests SHOULD be processed atomically and that SPARQL update provides no other transactional mechanisms 18:58:20 <AndyS> +1 18:58:21 <dcharbon2> + 18:58:23 <dcharbon2> +1 18:58:28 <Mattperry> +1 18:58:31 <sandro> +1 18:58:32 <Souri> +1 18:58:38 <bglimm> +1 18:58:43 <LeeF> RESOLVED: Close ISSUE-26 by noting that update requests SHOULD be processed atomically and that SPARQL update provides no other transactional mechanisms, no abstentions or objections 18:58:49 <ivan> q+ 18:59:00 <kasei> LeeF: ISSUE-19 18:59:04 <LeeF> ack ivan 18:59:08 <sandro> issue-19? 18:59:08 <trackbot> ISSUE-19 -- Security issues on SPARQL/UPdate -- OPEN 18:59:08 <trackbot> 18:59:20 <kasei> ivan: what do we do about HTTP PATCH? 18:59:33 <kasei> LeeF: need chimezie to be involved in that discussion. 18:59:52 <kasei> ivan: spec says must use HTTP POST. might we allow PATCH? 19:00:59 <LeeF> ISSUE: Does HTTP PATCH affect either the SPARQL Protocol or the SPARQL Uniform etc. HTTP etc. Protocol? 19:00:59 <trackbot> Created ISSUE-56 - Does HTTP PATCH affect either the SPARQL Protocol or the SPARQL Uniform etc. HTTP etc. Protocol? ; please complete additional details at . 19:01:01 <ivan> -> 19:01:15 <sandro> issue-56 : 19:01:15 <trackbot> ISSUE-56 Does HTTP PATCH affect either the SPARQL Protocol or the SPARQL Uniform etc. HTTP etc. Protocol? notes added 19:01:35 <kasei> LeeF: don't want to spend time on this now. will follow up later. 
19:01:36 <ivan> -> PATCH RFC on Patch 19:01:55 <kasei> ... Next: Security in SPARQL Update. 19:02:36 <kasei> SteveH: my concerns is around making the endpoint perform HTTP requests. 19:02:45 <kasei> LeeF: did we address that in SPARQL 1.0 Query? 19:03:07 <kasei> SteveH: sort of waved at. some people do it with FROM, but not everyone does this. 19:03:09 <AndyS> q+ to say that covered by any processor can reject a request in query 19:03:27 <kasei> ... we require explicitly saying you want that feature at endpoint startup. 19:03:27 <LeeF> ack AndyS 19:03:27 <Zakim> AndyS, you wanted to say that covered by any processor can reject a request in query 19:03:28 <AndyS> ack me 19:03:59 <kasei> AndyS: you can just refuse the query if you don't want to make the request. 19:04:08 <kasei> ... obligated to write a security section for the Update doc. 19:04:34 <kasei> ... usually will implement inside a container that affects this. no one single model. 19:04:51 <kasei> LeeF: does anyone think we need to do more than just a security section in the document? 19:05:01 <kasei> ivan: anything more than that will be too much given our timeline. 19:05:40 <kasei> LeeF: hoping to give advice in the spec. get a list of issues that are important. 19:05:56 <kasei> ... let's list some of the things we know are important. 19:06:06 <LeeF> * loading external data (LOAD <uri>) 19:06:24 <LeeF> ** (related: overflowing your internal store with loaded data) 19:06:44 <AxelPolleres> * similar: interpreting FROM/FROM NAMED as HTTP request 19:07:00 <LeeF> * access control 19:07:02 <AxelPolleres> ... just as in Query 19:07:02 <SteveH> and multiple LOADs gives an escalation attack 19:07:53 <LeeF> * chain authorization to foreign servers? 19:08:14 <SteveH> LOAD is a flexible pivot 19:08:52 <LeeF> * extension functions (applies to both query and update) 19:10:10 <LeeF> ISSUE-19 : Group consensus to address security considerations in SPARQL Update via Security Considerations informative text in Update spec. 
19:10:10 <trackbot> ISSUE-19 Security issues on SPARQL/UPdate notes added 19:10:30 <kasei> LeeF: Concurrency (ISSUE-18) 19:10:46 <AxelPolleres> Appendix A in update doc should be updateded with that list! 19:11:12 <SteveH> 19:11:19 <kasei> ... any proposals for locking or other concurrency facilities? 19:11:42 <kasei> pgearon: I lumped it together with atomicity issues. 19:12:00 <kasei> LeeF: covered by previous atomicity discussion. 19:12:31 <kasei> ... our system rejects updates if you try to update an old version of data (pass in the version of data you think you're updating) 19:12:49 <kasei> dcharbon2: some people will never need such features. 19:13:08 <LeeF> PROPOSED: Close ISSUE-18 noting the atomicity recommendation in the resolution to ISSUE-26 and noting no plans to add any explicit mechanism for concurrency 19:13:19 <ivan> +1 19:13:25 <bglimm> +1 19:13:30 <AxelPolleres> +1 19:13:32 <AndyS> +1 19:13:35 <LeeF> RESOLVED: Close ISSUE-18 noting the atomicity recommendation in the resolution to ISSUE-26 and noting no plans to add any explicit mechanism for concurrency, no abstentions or objections 19:13:40 <pgearon> +1 19:14:46 <LeeF> kasei: Consider in protocol making use of HTTP 204 Accepted For Processing ? 19:15:15 <kasei> s/204/202/ 19:15:41 <kasei> pgearon: what happens if you load half a document and run into an error? 19:16:09 <kasei> sandro: that's just a kind of error ('your data is now corrupt') 19:16:48 <kasei> LeeF: now passed the agenda items scheduled through 10:00 this morning. 19:17:28 <kasei> LeeF: Moving on to test suite. 19:17:37 <kasei> ... sent overview on how tests were done in DAWG. 19:17:39 <AxelPolleres> topic: testing 19:18:00 <kasei> ... super manifest listing manifests which described in RDF the tests. 19:18:14 <kasei> ... each tests gave info on seting up, running, and evaluating results of the test. 19:18:19 <kasei> ... syntax tests were just the query. 19:18:39 <kasei> ... 
eval tests had data in default (and any named) graph(s), the query, and expected results. 19:19:01 <kasei> ... additional semantics for things like required ordering. 19:19:38 <kasei> ... asked people to return results in EARL. ericP put together system to put results in a database, generate readable reports. 19:19:57 <kasei> ... every test was characterized by facets of query language it was testing. 19:20:24 <kasei> ... when failing test X, attribute failure to the proper part of the language. 19:21:00 <kasei> ericP: we were clever in figuring out what features each query used, not sure about taking that into account on grading the results. 19:21:40 <kasei> LeeF: for update, need to start with a dataset, an update request, and expected results. 19:21:54 <kasei> ... expected results are success/failure and a state of the graphstore after the update. 19:22:10 <kasei> ... need to decide if we'll use the same manifest/vocab approach. 19:22:32 <kasei> ... ericP had suggested the use of TriG for serializing quads. 19:22:55 <kasei> ... then need to start collecting tests. 19:23:03 <ericP> 19:23:11 <SteveH> q+ 19:23:18 <ericP> -> SWObjects SPARUL tests 19:23:20 <LeeF> ack SteveH 19:23:31 <kasei> SteveH: support for using TriG. 19:23:50 <kasei> bglimm: don't know how to parse that. 19:24:04 <kasei> ... if existing tools can't parse it... 19:25:02 <kasei> ... using OWL API. can parse turtle. 19:25:20 <AndyS> While TriG is great it's not a spec and some details need to be worked out. And it's not a superset of N-Quads 19:25:58 <kasei> SteveH: could develop framework to turn TriG into HTTP updates 19:26:05 <AndyS> q+ 19:26:06 <LeeF> q+ to mention that maybe we need someone to first put forth a version of for us 19:26:06 <kasei> SteveH, did I get that right? 19:26:29 <SteveH> kasei, not just Trig, the whole thing 19:26:36 <kasei> bglimm: not able to run dawg tests right now. 19:26:56 <kasei> SteveH, could you scribe what you said? 
19:27:26 <SteveH> SteveH: maybe we could colaboratively develop a test harness, which reads the manifests, and judges pass/fail 19:27:39 <kasei> ericP: in trig all the individual pieces come together in a single document. parsing burden is probably less than bookkeeping burden. 19:27:57 <kasei> ivan: don't want to put too much burden on implementors. 19:28:44 <kasei> LeeF: at some point tradeoff with our time and effort in producing a good test suite. 19:28:50 <kasei> ... this was a source of pain the first time around. 19:29:26 <kasei> ericP: will be asking a lot less of people this time even with TriG than we did last time with 1.0. 19:29:33 <SteveH> q+ 19:29:44 <LeeF> ack AndyS 19:29:45 <AxelPolleres> TriG would be used for dataset and before/after graphstore, yes? anything else? 19:29:46 <kasei> sandro: TriG only brings together the data, though? not the results or other pieces? 19:29:59 <kasei> LeeF: correct. 19:30:08 <kasei> AndyS: don't see that TriG adds much. 19:30:19 <kasei> ... not all toolkits will have support. 19:30:19 <AxelPolleres> q+ 19:31:06 <kasei> ivan: RDFa test suite, you submitted the URLs for different convertors. tool could run whole test suite. 19:31:22 <kasei> ... sent off data to various distillers, got results and ran SPARQL queries against them and checked with expected result. 19:31:39 <AxelPolleres> I *think* that this is what resulted out of that work of Michael Hausenblas for RDFa: 19:31:41 <kasei> ... each test has HTML file, and what results of SPARQL query should look like. entirely automatic. yes/no for each test. 19:31:56 <kasei> ... worked well, but requirements are much simpler than for SPARQL's case. 19:32:17 <kasei> ... setup was very simple. 19:32:19 <AxelPolleres> but he said to me that it is kinda alpha at the moment (stable mid-end 2010) 19:32:24 <LeeF> q? 19:33:04 <kasei> ivan: all tests we have for 1.0 are valid for 1.1, so starting a new testing framework would be lost time. 
19:33:21 <LeeF> ack SteveH 19:33:27 <pgearon> +1 on reusing current framework 19:33:52 <kasei> SteveH: agree with ericP that going to TriG is easier. 19:33:59 <kasei> ... current manifest is really complicated. 19:34:02 <AxelPolleres> q? 19:34:27 <kasei> AndyS: hearing proposal to flatten manifest files. 19:35:02 <kasei> LeeF: implementors are going to need to write new code anyway to support update tests. 19:35:24 <kasei> ivan: but we're going to add tests to Query also. 19:35:36 <ericP> -> SPARUL tests 19:36:00 <LeeF> ack AxelPolleres 19:36:15 <kasei> ivan: how many implementors are going to be upset at having to make big changes to testing infrastructure? 19:36:46 <kasei> AxelPolleres: trig sounds like good idea for results of update. 19:37:20 <kasei> ... do we need something in protocol to dump the graphstore? 19:39:13 <kasei> ivan: readapting rdfa framework for sparql would make it alpha-quality code. 19:39:33 <kasei> pgearon: in favor of leaving things as they are. queries will work the same way. update is almost the same. 19:40:16 <kasei> ... different datastructure for supporting update. will be a major change to test suite to work with that. 19:40:25 <LeeF> ack LeeF 19:40:25 <Zakim> LeeF, you wanted to mention that maybe we need someone to first put forth a version of for us 19:41:29 <kasei> LeeF: suggest we need couple of people to re-cast existing DAWG test document to describe how we'll do update tests. maybe service descriptions. 19:41:43 <kasei> ... until we do that we can't start collecting test cases. 19:41:57 <kasei> ... need volunteers. 19:42:05 <kasei> *crickets* 19:42:44 <kasei> ... otherwise will move to CR without any way to tell if features are properly implemented. 19:42:53 <kasei> ... bad for all sorts of reasons. 19:43:26 <kasei> AxelPolleres: looking for somebody to update dawg test doc and/or people to maintain manifests and tests cases. 19:43:49 <kasei> LeeF: can do it by committee if we get some basic work done. 
first is the dawg test doc. needs to handle update test cases. 19:44:05 <kasei> ... in ideal world we'd have test editors. 19:44:16 <AxelPolleres> I can do a first shot 19:45:08 <LeeF> ACTION: AxelPolleres to recast into SPARQL WG space and update to handle SPARQL Update test cases by April 13, 2010 19:45:08 <trackbot> Sorry, couldn't find user - AxelPolleres 19:45:15 <LeeF> ACTION: Axel to recast into SPARQL WG space and update to handle SPARQL Update test cases by April 13, 2010 19:45:15 <trackbot> Created ACTION-208 - Recast into SPARQL WG space and update to handle SPARQL Update test cases by April 13, 2010 [on Axel Polleres - due 2010-04-01]. 19:46:33 <kasei> pgearon: Axel can get in touch with me about changes needed for update syntax. Have existing thoughts on the changes. 19:47:34 <kasei> LeeF: not sure what SD testing looks like. protocol we have existing work we can build on. 19:47:52 <kasei> ... HTTP Update Protocol testing probably has to be similar to protocol+update testing. 19:48:03 <kasei> ... entailment similar to query testing. 19:48:17 <kasei> ivan: if we wanted to be formal, entailment would involve the OWL tests. 19:48:50 <LeeF> q? 19:48:56 <kasei> bglimm: could manually translate OWL tests into SPARQL queries. 19:49:23 <kasei> ivan: not testing inference engines. SPARQL should consider them correct. 19:49:34 <kasei> ... have to test relation of those inference engines to SPARQL. 19:49:46 <kasei> bglimm: you wouldn't get that with the OWL tests. 19:50:03 <kasei> ivan: right. that's not our job. 19:50:20 <kasei> ... test mechanism should be simple to show that the inference does happen. 19:50:44 <kasei> bglimm: can use same format as query tests. maybe more results. 19:51:37 <kasei> AxelPolleres: how are non-deterministic queries treated? 19:52:05 <kasei> LeeF: there were test cases for REDUCED. bits in manifest to indicate to use reduced semantics. 19:52:22 <kasei> ... don't know how that would apply to SAMPLE, for example. 
maybe we could come up with something. 19:52:36 <kasei> AxelPolleres: if results aren't completely ordered, not sure what we do. 19:53:10 <kasei> ... small enough test cases could list all possible results. 19:54:35 <kasei> LeeF: tomorrow's plan: query issues, not exists vs. minus 19:54:41 <kasei> ... propose we go with not exists 19:54:56 <kasei> ... property path issues 19:55:01 <betehess_> betehess_ has joined #sparql 19:55:02 <kasei> ... entailment issues 19:55:07 <kasei> ... SD issues and testing 19:55:48 <AndyS1> AndyS1 has joined #sparql 20:06:50 <LeeF> Adjourned. 20:41:11 <AndyS> AndyS has joined #sparql 20:45:53 <pgearon> pgearon has joined #sparql 23:13:52 <AxelPolleres> AxelPolleres has joined #sparql 23:59:31 <AxelPolleres> AxelPolleres has joined #sparql # SPECIAL MARKER FOR CHATSYNC. DO NOT EDIT THIS LINE OR BELOW. SRCLINESUSED=00001141 | http://www.w3.org/2009/sparql/wiki/index.php?title=Chatlog_2010-03-25&oldid=2038 | CC-MAIN-2015-32 | refinedweb | 10,570 | 65.56 |
You should try it on a 'real' graphics card and see for yourself. I have learned that it is not very wise to trust Intel when it comes to OpenGL.
You should try it on a 'real' graphics card and see for yourself. I have learned that it is not very wise to trust Intel when it comes to OpenGL.
Yes I have. But since I cleared before my depth pass my engine issued a glDrawBuffers(GL_BACK) right before clearing. And clearing later did not help, because I disabled color writing and did not...
Color and depth are ok. Color shows an all black texture and depth an all white one (verified with GIMP that it really is only a single color).
Position and normal are corrupted. So my question is...
This is correct. The problem is with the fragments that are not touched in the geometry pass.
No. Perhaps I did not make myself clear. I do only draw the cylinder once (well actually three times...
This I tried before posting here. Doesn't change anything. I assume it is because of the 'corrupt' depth buffer.
Yes, I know. This is why I am looking for help. The curves are what I called...
Hi all,
as the subject states I have problems with my deferred renderer and skyboxes. The problem is that my geometry bleeds into the areas where the skybox is drawn.
...
It looks like you are missing something, because that is exactly what deferred shading gives you. In the geometry pass you fill your FBO (G-Buffer) with all the necessary information...
This is a really strange question. I don't understand it at all. AFAIK there are only VA and CVA (compiled vertex arrays).
Perhaps it is just (very) badly worded and they want to know about pre...
I don't think so.
Yes, for OpenGL beginners.
For you a good place to start would be the link I gave you. Much to read but it will help you enormously in the long run. You will learn why...
cat /var/log/Xorg.0.log|grep -i nvidia|grep memory:
This gives you the total amount of memory as the driver reports it. As you see this is only for NVIDIA cards, I don't know anything about ATI....
Option 1:
Learn to do it without OpenGL and come back if you have any OpenGL related problems. A general C forum sounds like a good place to start.
Option 2:
Send me money and I will do it for...
IMHO yes. You could take a look at CUDA, perhaps it is better suited as it is not a graphics API designed to get input data and to output images.
Yes, with OpenSG. No, with OpenGL and real FBOs.
Give card, driver version, OS and code.
For me C/C++ always causes a lot of trouble (no matter if OpenGL is involved or not) and I know C/C++ for years and even earn my money with it. :)
OpenGL on Windows is a major PITA, too.
But after...
Disable everything that has to do with texturing. It is important to debug in small steps. First make sure that you are able to correctly render the model before looking into texturing. To achieve...
AFAIK OpenGL commands must be issued in the same thread that created the context. So if your gtk signal handler work in their own thread it is supposed to not work.
Yes. So just drop GLUT. Any reason why you do not use a GTK OpenGL widget?
1) Yes.
2) Yes, but the center of the bounding sphere is not necessarily the same as the center of your object. You can also use multiple spheres to get a better fit or use bounding boxes....
Here is some D code for you:
// Hello World in D
import std.stdio;
void main()
{
writefln("Hello World!");
}
Couldn't resist.
This is ugly. Why not create context, draw some nice OpenGL stuff and then load all the resources?
Sorry, this question is too advanced for me. Perhaps you better post in the highly advanced forum.
AFAICS you are resetting your modelview matrix in your draw function, overwriting gluLookat's changes to it.
Instead of using perspective you can just scale the text, of course. | http://www.opengl.org/discussion_boards/search.php?s=435398f25add8d414be4b05697d6546c&searchid=382426 | CC-MAIN-2013-20 | refinedweb | 714 | 85.49 |
#include <pslib.h>
int PS_begin_pattern(PSDoc *psdoc, float width, float height, float xstep, float ystep, int painttype)
A pattern can be thought of like a color for drawing and filling. Actually, PS_setcolor is used to apply a pattern. The pattern itself has given dimension an a stepping in horizontal and vertical direction. The stepping comes into effect when the area to be fill is larger than the pattern itself. In such a case the pattern will be repeated with a distance of xstep in horizontal and ystep in vertical direction.
Each call of PS_begin_pattern must be accompanied by a call to PS_end_pattern(3).
Returns identifier of the pattern or zero in case of an error. The identifier is a positiv number greater 0.
PS_end_pattern(3), PS_set_color(3)
This manual page was written by Uwe Steinmann uwe@steinmann.cx. | http://www.makelinux.net/man/3/P/PS_begin_pattern | CC-MAIN-2013-48 | refinedweb | 138 | 56.66 |
Product Version = NetBeans IDE 7.3 Beta 2 (Build 201211062253)
Operating System = Linux version 3.2.0-33-generic-pae running on i386
Java; VM; Vendor = 1.7.0_09
Runtime = Java HotSpot(TM) Client VM 23.5-b02
maven project:
opened src/main/resources/META-INF/orm.xml
click in editor.
Ctrl-S (Emacs keybinding) does nothing.
Edit -> Find is greyed out, as is Edit -> Replace.
Other buffers are still working normally.
Please check if your file is not read-only for some reason.
in a persistence.xml (ORM mapping) in a sample application, the Find feature works well.
Please zip the project (even excluding sources, or irrelevant parts), or make a sample project that exhibits the defect & attach to the defect. thanks.
(In reply to comment #1)
> Please check if your file is not read-only for some reason.
Yeah, it isn't read-only. I can edit it to my hearts content.
You are correct that find works fine in persistence.xml. It is only orm.xml that has an issue.
please note that these are 2 different types of xml documents. persistence.xml sets up the persistence context. orm.xml has actual mapping definitions (i.e. overrides the JPA class annotations).
I created manually a sample orm.xml file (using namespace). Find is enabled as usual, file is recognized as configuration file by JPA project support.
Passing to JEE team, maybe some project-specific issue ?
there is no orm.xml support on jpa support side, i.e. general xml support should work here.
also I see it was reported against beta2, is it reproducible with release?
is it valid for any file "orm.xml" with any content and location, can you provide your orm.xml or/and sample project?
also tried to crate some sample orm.xml in maven project (to have the same path), I'm using default keybinding but it shouldn't affect main menu in my opinion. Edit|find isn't disabled in my case nether right after orm.xml creation nor after ide restart.
I'm seeing the same behaviour in NetBeans 7.3: I can't use Find in orm.xml-files.
The actual file name doesn't seem to matter, but once I remove the xmlns-attribute from the root element, I can use Find again.
Created attachment 134009 [details]
Sample project
I've attached a sample project with an orm.xml file. This is an Ant project, but the bug happens in Maven projects as well.
*** Bug 227319 has been marked as a duplicate of this bug. ***
as even name doesn't matter and it's related to existence of xmlns, move to xml area
Report from old NetBeans version. Due to code changes since it was reported likely not reproducible now. Feel free to reopen if happens in 8.0.2 or 8.1. | https://netbeans.org/bugzilla/show_bug.cgi?id=225145 | CC-MAIN-2016-36 | refinedweb | 475 | 71 |
Building Distributed Apps? Use XML Web Services, Not Remoting (Mostly), Page 3
Listing 2: All of this complicated-looking code is generated by SPROXY, which is invoked by the WSDL utility.
'------------------------------------------------------------------ ' <autogenerated> ' This code was generated by a tool. ' Runtime Version: 1.1.4322.573 ' ' Microsoft.VSDesigner, 'Version 1.1.4322.573. ' Namespace localhost2 '<remarks/> <System.Diagnostics.DebuggerStepThroughAttribute(), _ System.ComponentModel.DesignerCategoryAttribute("code"), _ System.Web.Services.WebServiceBindingAttribute( _ Name:="Service1Soap", [Namespace]:= _ "")> _ Public Class Service1 Inherits System.Web.Services.Protocols.SoapHttpClientProtocol '<remarks/> Public Sub New() MyBase.New Me.Url = "" End Sub '<remarks/> <System.Web.Services.Protocols.SoapDocumentMethodAttribute( "", RequestNamespace:="", ResponseNamespace:="", Use:=System.Web.Services.Description.SoapBindingUse.Literal, ParameterStyle:= _ System.Web.Services.Protocols. _ SoapParameterStyle.Wrapped)> _ Public Function HelloWorld() As String Dim results() As Object = Me.Invoke("HelloWorld", _ New Object(-1) {}) Return CType(results(0),String) End Function '<remarks/> Public Function BeginHelloWorld(ByVal callback _ As System.AsyncCallback, ByVal asyncState _ As Object) As System.IAsyncResult Return Me.BeginInvoke("HelloWorld", New Object(-1) {}, _ callback, asyncState) End Function '<remarks/> Public Function EndHelloWorld(ByVal asyncResult _ As System.IAsyncResult) As String Dim results() As Object = Me.EndInvoke(asyncResult) Return CType(results(0),String) End Function End Class End Namespace
The XML Web services technology in .NET manages all of this code for you. All you have to do is treat the Web service like a black box and interact with its external interface. Listing 3 shows how easy it is to call the HelloWorld Web method.
Listing 3: Call a Web service's methods like any other method, by creating an instance of the class and invoking the method.
Module Module1 Sub Main() Dim Service As localhost2.Service1 = New localhost2.Service1 Console.WriteLine(Service.HelloWorld()) Console.WriteLine("press enter") Console.ReadLine() End Sub End Module
In the code, you declare an instance of the Web service, including the namespace. By default, the Web service is named Service1. Like any other object, I need to create an instance of the class and then simply invoke the class's behaviors.
Advanced Web Service Information
To do even more advanced things with Web services, you need to know more. The example invoked the Web service synchronously. This means the next line of code doesn't execute until the Web method returns. If you have a long-running process, you may want to invoke the Web method asynchronously. Asynchronous invocation permits your code to go and do other things without waiting for the Web service to return.
To use asynchronous Web services, you need to be comfortable with creating and using delegates. You also need to remember that the asynchronous Web method returned comes back on a separate thread from the invoking thread. You need to be comfortable with the concept of marshalling data across threads by using the Invoke method and more delegates.
Also, you can pass and return composite types across Web services, but by default the client application works with a flattened, properties-only version of the composite type, except in the case of ADO.NET DataSets. You also can typecast a flattened Web service-defined proxy class to a class with methods, but this means deploying your business classes to consumers or consumers creating their own business classes. Deploying business classes is not recommended.
Finally, you could use Dotfuscator to obfuscate—scramble the MSIL into gibberish—to prevent consumers from using Anakrino or Reflector to decompile your business classes, but again, you may not always be able to get your business classes deployed to consumer machines. Hence, it is better to design applications that use Web services in such a manner as to not need to deploy business classes.
XML Web Services Will Do
XML Web services in .NET are built on top of .NET Remoting. For all intents and purposes, a Web service is marshal-by-value .NET Remoting. The rumor mill also suggests that .NET Remoting may be completely concealed by XML Web services in the future, making advanced features of Remoting like event sinks available using Web services.
I hope you are more comfortable with XML Web services. Now, when you have to choose between Remoting and Web services, you are prepared to make a decision. Most of the time when you create distributed applications, XML Web services will<< | http://www.developer.com/net/vb/article.php/10926_3447821_3/Building-Distributed-Apps-Use-XML-Web-Services-Not-Remoting-Mostly.htm | CC-MAIN-2014-15 | refinedweb | 714 | 50.33 |
):
So that's where I'm up to - and I'm out of ideas. Can anyone suggest what may be causing the delays and/or what I should be trying next?
Well, we finally appear to have resolved this issue in our environment. For the benefits of others, here's what we discovered and how we fixed the problem:
To try and gain further insight into what was occurring before/during/after the delays we used Wireshark on a client machine to capture/analyse network traffic whilst that client attempted to access a DFS share.
These captures showed something strange: whenever the delay occurred, in between the DFS request being sent from the client to a DC, and the referral to a DFS root server coming back from the DC to the client, the DC was sending out several broadcast name lookups to the network.
Firstly, the DC would broadcast a NetBIOS lookup for DOMAIN (where DOMAIN is our pre-Windows 2000 Active Directory domain name). A few seconds later, it would broadcast a LLMNR lookup for DOMAIN. This would be followed by yet another broadcast NetBios lookup for DOMAIN. After these three lookups had been broadcast (and I assume timed out) the DC would finally respond to the client with a (correct) referral to a DFS root server.
These broadcast name lookups for DOMAIN were only being sent when the long delay opening a DFS share occurred, and we could clearly see from the Wireshark capture that the DC wasn't returning a referral to a DFS root server until all three lookups been sent (and ~7 seconds passed). So, these broadcast name lookups were pretty obviously the cause of our delays.
Now that we knew what the problem was, we started trying to figure out why these broadcast name lookups were occurring. After a bit more Googling and some trial-and-error, we found our answer: we hadn't set the DfsDnsConfig registry key on our domain controllers to 1, as is required when using DFS in a DNS-only environment.
When we originally setup DFS in our enviroment we did read the various articles about how to configure DFS for a DNS-only environment (e.g. Microsoft KB244380 and others) and were aware of this registry key, but had misintepreted the instructions on when/how to use it.
KB244380 says:
The DFSDnsConfig registry key must be
added to each server that will
participate in the DFS namespace for
all computers to understand fully
qualified names..
Obviously we're happy with this outcome, but I would add that I'm still not 100% convinced that this is our only problem - I wonder if adding DfsDnsConfig=1 to our DCs has only worked around the problem, rather than solving it. I can't figure out why the DCs would be trying to lookup DOMAIN (the domain name itself, rather than a server in the domain) during the DFS referral process, even in a non-DNS-only environment, and I also know I haven't set DfsDnsConfig=1 on domain controllers in other (admittedly much smaller / simpler) DNS-only environments and haven't had the same issue. Still, we've solved our problem so we are happy.
I hope this is helpful to the others who are experiencing a similar issue - and thanks again to those that offered suggestions along the way.
This could be caused by the DNS server netmask ordering. We came across this recently in Server 2003. This depends on your current subnetting.
Example.
Site 1: IP subnet 10.0.0.0/24
Site 2: IP subnet 10.0.1.0/24
Client in site 2 makes a DNS query for your domain based namespace and will be given the DFS server in site 1 by default as the DNS server is not aware of the site IP boundaries. You need to tell your DNS servers what subnet mask to use to identify which IP addresses to respond with.
See
Smells like a DNS problem but anything goes. I much prefered the old FRS because the diagnostics tools like Ultrasound was so useful :7
Do you get anything in the DFS Replication Event Log on the targets? (the DFS Health report will draw its warnings from the event log)
Running without WINS is a nice goal and admirable, though I'm pretty much against this if there's any pre-Vista/2008 Windows systems around as things aren't always working as expected or as fast without WINS in my experience - though it really shouldn't matter.
The Active Directory Team Blog has a Three part article ALL about DFS Delays.
It covers the basics on the Referral Process, and then shows how to use various tools including dfsUtil and dfsDiag to discover the actual cause of the delays.
It helped me find my problem. Which turned out to be no Read permissions on the the share directory for Domain Users.
HTH,
Daniel
The client caches a DFS referral, i.e. when you enter \domain.name\namespace it will cache which actual server domain.name refers to. Once the referral expires from the cache, the client basically has to "discover" your DFS topology all over again, hence the delay.
Have a look here: and here for further info on how this works.
Possible solutions? A hacky way of going about it might be to write a small program that does a "keep alive" every few minutes; e.g. a C program that fopen's the first file it finds and immediately fclose's it. I haven't tried or tested this, and you would definitely need to give some careful consideration if you were going to do it.
We have had a similar-sounding problem, where users would experience delays (up to a minute) between clicking on a drive mapped to a DFS share, and being able to see and browse to the folders within the share.
The users also had home drives mapped to a different DFS share on the same volume, and had no delay when accessing folders there.
The difference between the two is Access-Based Enumeration (ABE) - the problem share has this enabled (it's a common drive for users, with thousands of folders - ABE means users only see those folders to which they have permissions).
Disabling ABE removed the problem entirely. Obviously this is not a solution as users then see all folders, confusing them. I have replicated the DFS share to a server with some spare disk as a temporary measure, and even with ABE enabled on this new target, the delay has gone.
The problem server is 2k3R2, and has an uptime of over 150 days (!), so it's going to get rebooted and have CHKDSK run over the offending volume. I'll post back here if this makes any difference to the problem. The new target is on a 2k8 server.
dfsutil /spcflush and dfsutil /pktflush can be a solution also in a multi site network make sure that the DFS link of the home site is coming form the local server and not from the cache.
You mention that you have 20 DFS servers yet fail to mention if all the servers are in the same facility.
If these servers are not in the same facility and each other site has it's own domain, you may want to make sure client failback is enabled.
I know the original poster was not using WINS, but I am posting for the benefit of others, as we used this post the most to help solve a very similar problem. For us it ended up being that someone had named their workstation with the same name as the domain. So every time the DC did a lookup on the domain name for the DFS referral, it wanted to resolve to that workstation, which caused a delay of multiple tens of seconds. A static <20> entry was placed into WINS pointing at a DC, and this solved the problem. If you have no WINS, you could experiment with placing the domain name as a machine name in the LMHOSTS file pointed at a DC to get the <20> lookup, and set the priority so that LMHOSTS is the first place consulted for resolving NetBIOS names.
Protege-OWL 3 FAQ
From Protege Wiki
Protege-OWL 3.x Frequently Asked Questions
See Also: Protege-OWL 4.x FAQ, Protege-Frames FAQ, Protege file encoding FAQ
How do I install Protege-OWL?
If you are new user, we ask that you register before downloading Protege:
Once you've registered, navigate to the download page on the Protege website to launch the platform-independent installer program:
Where do I ask questions and report bugs?
Please post comments, questions, and bug reports on the protege-owl mailing list. You must be subscribed to the list to post messages; see the instructions for subscribing. If you have trouble subscribing and/or posting to the list, send a message to the list owners.
Where can I look at a list of known bugs and feature requests?
Go to the Protege Bugzilla Main Page and click "Search existing bug reports". On the Advanced Search tab, choose Protege as the Product, choose Protege-OWL as the Component, specify a version number, and click the Search button.
How do I load an OWL file?
- In Protege versions 3.2.1 and later, choose File | Open..., specify the location of your OWL file in the "Open Project" dialog, and click OK.
- In Protege 3.1.1, choose File | New Project... to bring up the "Create New Project" wizard. Check the "Create from Existing Sources" checkbox, choose "OWL Files (.owl or .rdf)" from the list of project types, and click Next. Specify the location of your OWL file in the "OWL file name or URL" field, and click Finish.
Why can't I load my OWL or RDF file?
If Protege fails to load a given OWL or RDF file, please try the following:
- Run it through the University of Manchester's OWL Validator to make sure your file is well formed.
- Run it through the OWL Syntax patcher and try reloading the result.
- If your file still does not load, post a question to the protege-owl mailing list with a link to your ontology file.
How do I work with multiple files and the import mechanism?
Please refer to our guide on managing imports.
Why does my Protege OWL/RDF file look strange?
Many people examine the OWL files produced by Protege and remark that they look strange and irregular. In particular, they find the files difficult to parse back with an XML parser. It is important to note that Protege uses the Jena parser library to save files, and thus we have little impact on the details of the output. This also means that however complex or irregular the OWL files look, you can always use Jena to parse the file for use.
How do I execute a reasoner?
Please refer to our guide on using reasoners.
How do I create numeric value restrictions such as "wheels with diameter over 10"?
Protege-OWL will support it in a future release.
Update: As of the Protege 3.2 beta release, this is now supported.
How do I create properties with duplicates and/or ordered values (with rdf:Lists)?
OWL/RDF property values are normally unsorted, i.e. the order of values for a property may be different the next time you load your ontology. Also, OWL/RDF does not allow you to assign duplicate property values. Trying to assign a duplicate value is usually prevented by the user interface. However, if the order of values or duplicates are important to you, you can use rdf:Lists. rdf:List is a predefined system class in RDF, and it is normally hidden in Protege. You can activate rdf:List in OWL | Preferences..., after which you can change the range of your property to rdf:List and set "Functional" to true, so that the property can take exactly one rdf:List as a value. Then, if you create an instance of a class where the property is used, you will get an RDFListWidget to create/add/remove/delete values for the property. This creates an rdf:List in the background.
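For illustration, this is what such a list looks like in Turtle syntax (the prefix and resource names here are made up; Protege builds an equivalent structure for you in RDF/XML):

```turtle
@prefix ex: <http://example.org/> .

# An ordered collection that may contain duplicates, attached
# through a functional property:
ex:myPlaylist ex:tracks ( ex:songA ex:songB ex:songA ) .
```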
An advanced scenario is illustrated in the list-example.owl file. You can import this file into Protege to see how it looks. It defines a subclass of rdf:List in which the entries in the list are restricted to the class Person. While rdf:List would allow values of any owl:Thing, this solution restricts the list entries, similar to a range definition on the property.
I like the traditional Protege-Frames interface, but Protege-OWL looks completely different...
For those users who are familiar with the traditional Protege-Frames user interface, the look and feel of the Protege-OWL UI may be a shock. There are many new symbols and widgets on the screen, and some of the traditional Protege-Frames features have been moved or obscured.
Sorry, but OWL is different! We tried to build an editor that provides access to as many of the advanced OWL features (such as logical class definitions) as possible. This means that the Protege-OWL UI is necessarily different from Protege-Frames.
Also, the language paradigms are different. While Protege-Frames is traditionally rather object-oriented (frame-based) with classes and slots, OWL is based on Description Logics. As a result, the usual metaphor of building a class with its attributes is not directly applicable in OWL. Rather, you use OWL to define classes by their logical characteristics and then take advantage of reasoning support.
If you want to build an OWL ontology, but still want to interact with the Protege-Frames UI, you have the following options:
- Work in Protege-Frames mode and export your file to OWL as necessary using the File | Export to Format menu item. If you use this approach, it should be noted that you won't be able to take advantage of advanced OWL features such as namespaces. Furthermore, you run the risk of using features that are not supported by OWL, such as abstract classes.
- In the Protege-OWL editor, use the simpler "Properties View" on the OWL Classes tab, instead of the "Logic View". The Properties View has a look and feel that is closer to the traditional Protege-Frames UI. You can switch to the Properties View by clicking on the Properties View radio button at the bottom right-hand corner of the OWL Classes tab.
- You could manually modify the Protege-OWL UI to make it simpler. The core Protege system supports the ability to customize the forms that the user sees for class creation, etc. For example, go to OWL | Preferences... and activate owl:Class on the Visibility tab. Then, you can navigate to the Forms tab, where you can easily replace or remove widgets you don't want to appear. For example, you could remove the conditions widget, remove the disjoint classes widget, and make the properties at class widget bigger.
- Only use RDF(S) concepts in your project (see next question).
How do I edit RDF(S) files with Protege?
OWL is an extension of RDF. Therefore, any RDF project can also be regarded as an OWL project which simply does not use advanced OWL features. While the focus of the Protege-OWL editor is on editing OWL ontologies, it can also be used to edit RDF ontologies and RDF Schema files or databases. To activate this support, go to the OWL | Preferences dialog and activate an RDF profile. When RDF is activated, Protege will display additional buttons to create pure RDFS classes and RDF properties. In particular, there will be a new button on the Properties tab, which can be used to create RDF properties. You can also decide whether new classes will be RDFS classes or OWL classes using "Create class using metaclass", or you can make rdfs:Class the default metaclass (both are done with a right-click on the classes tree in the OWL Classes tab). If you are creating a new project, you can select an RDF profile in one of the wizard pages.
Note that we generally don't recommend mixing pure RDF(S) elements with OWL elements in OWL ontologies, but Protege at least allows you to load, import and edit RDF if needed. This may be particularly important if your project requires access to ontologies or structured data that is only available as RDF(S). Also note that the support in Protege for editing RDF should not be confused with the older RDF back-end that was developed for Protege-Frames. More information about the Protege-Frames RDF back-end is available on this wiki.
If you are interested in RDF, you may want to take a look at some of the other RDF-related plug-ins that have been developed for Protege:
- RDF(s)-DB back-end: Store and load ontology and instance data from a Sesame repository.
- Oracle RDF Data Model: Manage OWL ontologies developed in Protege in the Oracle RDF store.
Please refer to the troubleshooting section in the OWLViz documentation.
StringBuilder and StringBuffer – A way to create mutable Strings in Java
String, one of the most important classes in Java programming, has a very important property: immutability, i.e. once created, we cannot change the value of that object.
- StringBuffer Class
- StringBuilder Class
- StringBuffer Class-
- StringBuilder Class –
So what happens when you perform any operation like toUpperCase(), sub-string operations, string modification operations and other operations on Java Strings?
Whenever you perform such an operation on a String, a new object is created in memory; you use the newly created object, and the previous object remains in memory until the garbage collector removes it.
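A quick way to see this in action (a minimal illustrative snippet, not from the original article): call a "modifying" method on a String and observe that the original is untouched and a different object comes back.

```java
public class ImmutabilityDemo {
    public static void main(String[] args) {
        String s = "codingeek";
        String upper = s.toUpperCase();   // creates and returns a NEW object

        System.out.println(s);            // codingeek  (original unchanged)
        System.out.println(upper);        // CODINGEEK
        System.out.println(s == upper);   // false - two distinct objects
    }
}
```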
Read more – Why Strings in Java are Mutable or Final?
Since String is a widely used class, a lot of String objects are created in memory as our program runs.
So, don’t you think it is a performance issue?
And the answer is, YES.
Although the JVM runs the garbage collector, which keeps cleaning memory by removing objects as soon as it finds them unused or unreferenced, Java also has the concept of a String pool, and objects in the pool are not removed instantly but only after some time.
So to introduce a new feature and to create Mutable Strings Java has two classes-
In one line, the difference between these two classes is that StringBuffer is thread-safe whereas StringBuilder is not, and StringBuilder is faster than StringBuffer.
So let's dig a little deeper into this.
StringBuffer Class-
This is the older class used to make mutable Strings in Java. Whenever we change a String made using this class, no new object is created, because the changes are made in the same object.
This class is thread-safe, i.e. it works properly in a multi-threaded environment. The methods of this class are synchronised, so whenever multiple threads invoke methods on the same object, they execute in the order of the method calls made by the individual threads.
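To see the synchronisation guarantee in action, here is a small sketch (not part of the original article): two threads append to one shared StringBuffer. Because append is synchronized, no updates are lost and the final length is deterministic.

```java
public class BufferThreads {
    public static void main(String[] args) throws InterruptedException {
        StringBuffer sb = new StringBuffer();
        // Each thread appends 1000 characters to the SAME buffer.
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) sb.append('a');
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();

        // Always 2000: synchronized append means no lost updates.
        System.out.println(sb.length());
    }
}
```

With a plain StringBuilder the same program may print less than 2000, since its unsynchronised append can lose updates under concurrent access.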
Main Methods are:-
- Append – Appends data at the end of String.
Example –
bufferReference.append(data);
- Insert – Inserts the data at any given Index.
Example –
bufferReference.insert(index, data);
Both of these methods are overloaded so that they can accept any type of data and append or insert it into the existing string. The rest of the methods perform basic String operations such as getting a substring, getting the char at an index, toString(), trimming, etc.
package codingeekStringTutorials;

public class StringBufferJava {
    public static void main(String[] args) {
        StringBuffer buffer = new StringBuffer("Codingeek");
        System.out.println("name = " + buffer.toString() + ", hashcode = " + buffer.hashCode());

        // append example of StringBuffer
        buffer.append(" - A Programmers Home");
        System.out.println("name = " + buffer.toString() + ", hashcode = " + buffer.hashCode());

        // insert example of StringBuffer
        buffer.insert(9, ".com");
        System.out.println("name = " + buffer.toString() + ", hashcode = " + buffer.hashCode());
    }
}
Output:-
name = Codingeek, hashcode = 16585653
name = Codingeek - A Programmers Home, hashcode = 16585653
name = Codingeek.com - A Programmers Home, hashcode = 16585653
In the above example, the same hashcode value each time shows that we are working on the same object throughout, i.e. no new object was created during any of the operations. The same explanation applies to StringBuilder.
StringBuilder Class –
This is a newer, lighter version of the StringBuffer class, introduced with the release of JDK 5. It is similar to StringBuffer, but it does not provide any guarantee of thread safety or synchronisation.
It is used where the application is single-threaded, as it is much faster in most cases
(think for yourself -> no synchronisation -> less overhead -> improved performance).
The rest is similar to StringBuffer, with the same methods and declarations. We just have to use a different class during initialisation; the rest of the code does not need to be changed, as it has the same methods.
package codingeekStringTutorials;

public class StringBuilderJava {
    public static void main(String[] args) {
        StringBuilder builder = new StringBuilder("Codingeek");
        System.out.println("name = " + builder.toString() + ", hashcode = " + builder.hashCode());

        // append example of StringBuilder
        builder.append(" - A Programmers Home");
        System.out.println("name = " + builder.toString() + ", hashcode = " + builder.hashCode());

        // insert example of StringBuilder
        builder.insert(9, ".com");
        System.out.println("name = " + builder.toString() + ", hashcode = " + builder.hashCode());
    }
}
Output:-
name = Codingeek, hashcode = 16585653
name = Codingeek - A Programmers Home, hashcode = 16585653
name = Codingeek.com - A Programmers Home, hashcode = 16585653
Fuzzy String Matching Using Python
In this article we will explore how to perform fuzzy string matching using Python.
Table of contents:
- Introduction
- Levenshtein Distance
- Simple Fuzzy String Matching
- Partial Fuzzy String Matching
- Out of Order Fuzzy String Matching
- Conclusion
Introduction
When working with strings matching or text analytics, we often want to find the matching parts within some variables or text. Looking at the text ourselves, we can tell that Toronto Airport and Airport Toronto are referring to the same thing, and that Torotno is just a misspelled Toronto.
But how can we solve this programmatically and have Python recognize these cases? We use fuzzy string matching!
To continue following this tutorial we will need the following Python libraries: fuzzywuzzy and python-Levenshtein.
If you don’t have it installed, please open “Command Prompt” (on Windows) and install it using the following code:
pip install fuzzywuzzy pip install python-Levenshtein
Levenshtein Distance
In order to understand the underlying calculations behind the string matching, let’s discuss the Levenshtein distance.
Levenshtein distance, in computer science, is a metric for measuring the similarity between two sequences (in our case, strings). It is often referred to as "edit distance".
How so? Simply put, it calculates the minimum number of edits needed to transform one string into the other. The fewer edits required, the more similar the two strings are.
To learn more about Levenshtein distance and its computation, check out this article.
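To make the computation concrete, here is a minimal dynamic-programming implementation of the Levenshtein distance (a sketch for illustration; fuzzywuzzy itself relies on the optimized python-Levenshtein C extension):

```python
def levenshtein(a, b):
    """Minimum number of single-character insertions, deletions and
    substitutions needed to turn string a into string b."""
    # dp[j] holds the distance between the current prefix of a and b[:j].
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        prev_diag, dp[0] = dp[0], i
        for j, cb in enumerate(b, start=1):
            prev_diag, dp[j] = dp[j], min(
                dp[j] + 1,               # deletion
                dp[j - 1] + 1,           # insertion
                prev_diag + (ca != cb),  # substitution (free if equal)
            )
    return dp[-1]

print(levenshtein("Airport", "Airprot"))  # 2
print(levenshtein("kitten", "sitting"))   # 3
```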
Simple Fuzzy String Matching
The simple ratio approach from the fuzzywuzzy library computes the standard Levenshtein distance similarity ratio between two strings; this is the basic form of fuzzy string matching in Python.
Let’s say we have two words that are very similar to each other (with some misspelling): Airport and Airprot. By just looking at these, we can tell that they are probably the same except the misspelling. Now let’s try to quantify the similarity using simple ratio string matching:
from fuzzywuzzy import fuzz string1 = "Airport" string2 = "Airprot" print(fuzz.ratio(string1, string2))
And we get:
86
So the computed similarity between the two words is 86% which is pretty good for a misspelled word.
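As a sanity check on that 86% figure: fuzz.ratio is Levenshtein-based, but the standard library's difflib produces the same number for this pair (a quick illustration, not how fuzzywuzzy computes it internally):

```python
from difflib import SequenceMatcher

def simple_ratio(s1, s2):
    # 2 * matched characters / total length, scaled to 0-100 like fuzz.ratio
    return round(SequenceMatcher(None, s1, s2).ratio() * 100)

print(simple_ratio("Airport", "Airprot"))          # 86
print(simple_ratio("Airport", "Toronto Airport"))  # 64
```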
This approach works fine for short strings and strings or relatively similar length, but not so well for strings of different lengths. For example, what do you think will be the similarity between Airport and Toronto Airport? It’s actually lower than you think:
from fuzzywuzzy import fuzz string1 = "Airport" string2 = "Toronto Airport" print(fuzz.ratio(string1, string2))
And we get:
64
What happens here is that the difference in the lengths of the strings plays a role. Luckily, the fuzzywuzzy library has a solution for it: the .partial_ratio() method.
Partial Fuzzy String Matching
Recall from the section above that when comparing Airport with Toronto Airport, we only got 64% similarity with simple string matching. In fact, both strings refer to an airport, which is what a reader sees as well.
Because the strings have significantly different lengths, we should do partial string matching. What we are interested in here is the best match of the shorter string within the longer string.
How does it work logically? Consider two strings: Airport and Toronto Airport. We can tell right away that the first string is a substring of a second string, that is Airport is a substring of Toronto Airport, which is a perfect match:
from fuzzywuzzy import fuzz string1 = "Airport" string2 = "Toronto Airport" print(fuzz.partial_ratio(string1, string2))
And we get:
100
Out of Order Fuzzy String Matching
A common problem we may face with the strings is the order of the words. For example, how similar do you think Airport Toronto is to Toronto Airport? 100%?
Using the techniques from the above sections, we find surprisingly low results:
from fuzzywuzzy import fuzz string1 = "Airport Toronto" string2 = "Toronto Airport" print(fuzz.ratio(string1, string2)) print(fuzz.partial_ratio(string1, string2))
And we get:
47 48
That is probably much lower than you would expect? It’s only 47%-48%.
What we find is that it is not only the similarity of the substrings that matters, but also their order.
Same Length Strings
For this case, the fuzzywuzzy library has a solution: the .token_sort_ratio() method. It tokenizes the strings, sorts the tokens alphabetically, and then does the string matching.
In our example, tokenizing Airport Toronto will keep it the same way, but tokenizing Toronto Airport will alphabetically order the substrings to get Airport Toronto. Now we are comparing Airport Toronto to Airport Toronto and you can guess we will probably get 100% similarity:
from fuzzywuzzy import fuzz string1 = "Airport Toronto" string2 = "Toronto Airport" print(fuzz.token_sort_ratio(string1,string2))
And we get:
100
Different Length Strings
For this case, the fuzzywuzzy library has a solution: the .token_set_ratio() method. It tokenizes the strings, splits them into an [intersection] group and [remainder] groups, sorts the strings in each group alphabetically, and then does the string matching.
Consider two strings: Airport Toronto and Toronto Airport Closed. In this case, the [intersection] group will be Airport Toronto, the [remainder] of the first string will be empty, and the [remainder] of the second string will be Closed.
Logically we can see that the score will be higher for the string pairs that have a larger [intersection] group since there will be a perfect match, and the variability comes from comparison of the [remainder] groups:
from fuzzywuzzy import fuzz string1 = "Airport Toronto" string2 = "Toronto Airport Closed" print(fuzz.token_set_ratio(string1,string2))
And we get:
100
Conclusion
In this article we explored how to perform fuzzy string matching using Python.
I also encourage you to check out my other posts on Python Programming.
Feel free to leave comments below if you have any questions or have suggestions for some edits.
The post Fuzzy String Matching Using Python appeared first on PyShark.
#include <stdio.h>

int puts(const char *s);
int fputs(const char *s, FILE *stream);
The puts() function writes the string pointed to by s, followed by a NEWLINE character, to the standard output stream stdout (see Intro(3)). The terminating null byte is not written.
The fputs() function writes the null-terminated string pointed to by s to the named output stream. The terminating null byte is not written.
The st_ctime and st_mtime fields of the file will be marked for update between the successful execution of fputs() and the next successful completion of a call to fflush(3C) or fclose(3C) on the same stream or a call to exit (2) or abort(3C).
On successful completion, both functions return the number of bytes written; otherwise they return EOF and set errno to indicate the error.
Unlike puts(), the fputs() function does not write a NEWLINE character at the end of the string.
See attributes(5) for descriptions of the following attributes:
exit(2), write(2), Intro(3), abort(3C), fclose(3C), ferror(3C), fflush(3C), fopen(3C), fputc(3C), printf(3C), stdio(3C), attributes(5), standards(5) | http://docs.oracle.com/cd/E36784_01/html/E36874/puts-3c.html | CC-MAIN-2016-07 | refinedweb | 190 | 67.08 |
CRIMA1028E ERROR: Error during "post-install configure" phase when upgrading to a new fix pack level for IBM HTTP Server for WebSphere Application Server for z/OS.
Here are my symptoms: imcl install com.ibm.websphere.IHS.zOS.v85_8.5.5004.20141119_2030 fails with
CRIMA1028E ERROR: Error during "post-install configure" phase:
CRIMA1028E ERROR: The directory /was/V85/ihs/bin cannot be found during the execution of the preinst.sh command. Installation Manager cannot execute the command because a required directory cannot be found. Identify the package that has the issue. Contact IBM Support.
Answer by Kelly Hasler (737) | Jan 13, 2015 at 09:40 AM
There is a problem with the way some IBM HTTP Server iFixes are built that requires the iFixes to be uninstalled before an upgrade to a new fix pack can be done. There is no problem with the actual code the iFix ships.
One known iFix with this issue is 8.5.0.2-WS-WASIHS-OS390-IFPI13028.
IBM HTTP Server iFixes built after June 2014 should not have this issue.
Use the following command to see if any iFixes are installed for the IBM HTTP Server: imcl listInstalledPackages -long
An example of this output that shows an iFix is installed is:
/was/V85/ihs : com.ibm.websphere.IHS.zOS.v85_8.5.5002.20140408_2037 : IBM HTTP Server for WebSphere Application Server for z/OS : 8.5.5.2
/was/V85/ihs : 8.5.5.0-WS-WASIHS-OS390-IFPI13028_8.5.5000.20140430_1732 : 8.5.5.0-WS-WASIHS-OS390-IFPI13028 : 8.5.5000.20140430_1732
If an iFix is installed for the IBM HTTP Server, uninstall the iFix. Then upgrade the IBM HTTP Server using the imcl install command.
We have not yet provided a SplitView Control in JavaFX (and it is not in the plan for 1.3). However, the main reason is that it is relatively simple to write one from scratch so we’re focusing on some of the harder things (like TreeViews). I was asked recently how to go about writing a SplitView in JavaFX, so I decided to write a very short blog post with sample code from a demo I wrote for this past Devoxx.
Essentially what I did was to create a specialized Container. (Note, if we were doing this in the core platform I’d make it a Control so that it is skinnable, but in my case this was lower overhead for something specialized I was doing). It consists of a “left” and “right” side (vertical splits would be similar and is left as an exercise to the reader!). It then has a transparent rectangle which is used as the divider, and on which we place the mouse events. Whenever the divider is dragged around, the SplitView is marked as needing to be laid out. Whenever layout occurs, the left & right sides are resized to fit relative to the divider. Here’s the code (which I have not sanitized so there may well be bugs!)
[jfx]
public class SplitView extends Container {
public var left:Node;
public var right:Node;
var pressX:Number;
var initialX:Number;
var divider:Rectangle = Rectangle {
blocksMouse: true
x: 200
y: 0
width: 5
cursor: Cursor.H_RESIZE
fill: Color.TRANSPARENT
onMousePressed: function(e) {
initialX = divider.x;
pressX = e.sceneX;
}
onMouseDragged: function(e) {
var delta = e.sceneX - pressX;
divider.x = initialX + delta;
requestLayout();
}
}
override var content = bind [left, divider, right];
override function doLayout() {
// layout the left node such that it fits up to divider.x
layoutNode(left, 0, 0, divider.x, height);
divider.height = height;
layoutNode(right, divider.x + 5, 0, width - (divider.x + 5), height);
}
}[/jfx]
Cool i didn’t know about the container.
For the people who want to see the api:
It says: "Base class for container nodes. A container is a resizable javafx.scene.Group whose width and height variables may be set explicitly."
Richard: Can you please post a picture so we can see how it looks like?
Sure, here’s a picture. In this case I chose to have the divider not draw anything, but using this technique you can draw it however you want. The left and right nodes here are Buttons.
Excellent,
This shows how easy it is to implement custom ui components in javafx.
How about an example of how this is used? Complete source code that produced that picture?
I'm trying to read and manipulate an audio file.
How do I read and manipulate the sample values of a WAV file using Python?
The SciPy libraries have great resources for this:
Writing and Reading:
import numpy as np
from scipy.io import wavfile

fs = 44100  # sample rate must be an integer
t = np.arange(0, 1.0, 1.0/fs)
f1 = 440
f2 = 600
x = 0.5*np.sin(2*np.pi*f1*t) + 0.5*np.sin(2*np.pi*f2*t)

fname = 'output.wav'
wavfile.write(fname, fs, x)

fs, data = wavfile.read(fname)
print(fs, data[:10])
Documentation:
This question has been asked before: How to read *.wav file in Python?
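If SciPy isn't available, the standard-library wave module can do the same job. Below is a minimal, self-contained sketch (the sample values are made up for illustration, and an in-memory buffer stands in for a real file) that writes a few 16-bit samples, reads them back, and halves the amplitude, which is the basic shape of any "manipulate the wave values" task:

```python
import array
import io
import wave

# A handful of made-up 16-bit mono samples ('h' = signed short).
# Note: array uses native byte order; WAV is little-endian, so this
# round-trips correctly on little-endian machines (x86, ARM).
samples = array.array('h', [0, 1000, -1000, 32000, -32000])

buf = io.BytesIO()                 # stands in for a real .wav file on disk
with wave.open(buf, 'wb') as w:
    w.setnchannels(1)              # mono
    w.setsampwidth(2)              # 2 bytes per sample = 16-bit
    w.setframerate(8000)
    w.writeframes(samples.tobytes())

buf.seek(0)
with wave.open(buf, 'rb') as r:
    data = array.array('h', r.readframes(r.getnframes()))

# "Manipulate the wave values": halve the amplitude of every sample.
quieter = array.array('h', (s // 2 for s in data))
print(list(quieter))  # [0, 500, -500, 16000, -16000]
```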
Thanks I'll give it a try.
Type: Posts; User: Evilreaper
yea the "How do I iterate an ArrayList and find the desired index?"
is there an example of using them the index i mean? Because before I used arraylist i actually use the normal int odd = new int[]; but didn't like putting limit numbers so i switched to arraylist.
because the numbers are being separated with 2 var which is the odd array and even array, to separate the odd and even numbers and I'm trying to search the numbers in the odd array specifically.
I...
Compatibility with the converted int since int[] and int is not comparable types so i made it an array for it to be equal
and the number that its trying to search which is 31.
if...
So Im trying to display the location for the odd number which is 31. But it won't display the location of it.
as you see in the code. Im using bluej there's no error in compiling it's just i feel...
System.out.print('\u000C'); umm the code is just part of the coding inside Bluej that allows the user to clearscreen. I read it online here.
Bluej for ICSE Schools: Clearing the terminal
ah i see that's what i did wrong... no wonder it wont run as i wanted it to be.. thx for help.~ i had 2 pick var... it works fine now. thanks again.
Sorry norm get rid of the System.out.print('\u000C');
Im using BlueJ
here..
Hello I have a problem in my coding. This coding is a Quiz Game creator. Lets say I want to create 3 question in this game.
and when the game starts.. it will display the 3 question.
So at that...
Thanks it helped even though i wasn't taught on how to use it~ decided to lurk around on what u meant.
Hello, Soo my java coding has this 2 ways of paymentterm that is cash and credit.
the problem is the category of the File that is being read would be on the Payment Term.
and the payment term...
Oh thank you so much I found the problem it was a careless mistake of lowercase and uppercase sensitivity.
Scanner readFile = new Scanner(new FileReader("C:\\Users\\user\\Documents\\DIT MKJB\\Programming Java\\customer.txt"));
PrintWriter writefile = new...
while (readFile.hasNext())
{
    ReceiptNo = readFile.nextDouble();
    PaymentTerm = readFile.next();
    CustomerName = readFile.next();
    ItemNo = readFile.next();
    ItemName = ...
Cannot Find Symbol - Variable writeFile
And also I took a print screen just in case.
import java.util.*;
import java.io.*;
import java.util.Arrays;

public class Assignment2
{
    public static void main(String[] args) throws FileNotFoundException
    {
DOM0-level events
<a href="#" id="hash" onclick="fn();fn1();">
<button type="button"> Go back to the top to open </button>
</a>
var btn = $('#hash').get(0);
btn.onclick = function(){
    alert('111');
};
btn.onclick = function(){
    alert('222');
};
The inline onclick attribute in the markup above is a DOM0-level event handler: fn and fn1 execute in order. The second way, assigning to the element's onclick property in script, is also DOM0-level. The second assignment overwrites the first onclick, and it also overrides the inline onclick, so clicking only pops up 222.
DOM2-level events
$('#hash').click(function(){
    alert('jQuery DOM2 first click');
});
$('#hash').click(function(){
    alert('jQuery DOM2 second click');
});
btn.addEventListener('click', function(){
    alert('native DOM2 first click');
}, false);
btn.addEventListener('click', function(){
    alert('native DOM2 second click');
}, false);
All of the bindings above are DOM2-level event bindings. The first two are registered through jQuery, the rest through native addEventListener. DOM2-level handlers do not overwrite one another: the jQuery and native handlers all execute in turn. This is where DOM2 differs from DOM0.
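This non-overwriting behaviour is not browser-only; Node.js (v15+) exposes the same EventTarget/Event API, so it can be checked directly (a small sketch, not part of the original article):

```javascript
// Node 15+ ships the WHATWG EventTarget/Event globals, so the
// "DOM2 handlers don't overwrite each other" rule can be shown here.
const target = new EventTarget();
const calls = [];

target.addEventListener('click', () => calls.push('first'));
target.addEventListener('click', () => calls.push('second'));

target.dispatchEvent(new Event('click'));
console.log(calls);  // [ 'first', 'second' ] - both handlers ran, in order
```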
DOM0 and DOM2 coexisting
<a href="#" id="hash" onclick="fn();fn1();">
    <button type="button">Go back to the top to open</button>
</a>
<script type="text/javascript">
function fn(){
    alert('ade');
}
function fn1(){
    alert('ade111');
}
var btn = $('#hash').get(0);
btn.onclick = function(){
    alert('111');
};
$('#hash').click(function(){
    alert('jQuery DOM2 first click');
});
btn.addEventListener('click', function(){
    alert('native DOM2 first click');
}, false);
</script>
The example above has two DOM0-level bindings (one inline, one in script) and two DOM2-level bindings. The DOM0 handler assigned in script overrides the inline fn and fn1 handlers; however, DOM0 and DOM2 handlers can coexist, so a click pops up, in order: 111, jQuery DOM2 first click, native DOM2 first click.
selda: Multi-backend, high-level EDSL for interacting with SQL (selda-0.2.0.0)

- Seamless prepared statements.
- Configurable, automatic, consistent in-process caching of query results.
- Requires GHC 7.10+.

Table columns are separated by the `:*:` operator. A table is parameterized over the types of its columns, with the column types also separated by the `:*:` operator. This, by the way, is why you need `TypeOperators`.
```haskell
people :: Table (Text :*: Int :*: Maybe Text)
people = table "people"
       $ primary "name" :*: required "age" :*: optional "pet"

addresses :: Table (Text :*: Text)
addresses = table "addresses"
          $ required "name" :*: required "city"
```
Columns may be either `required` or `optional`. Although the SQL standard supports nullable primary keys, Selda primary keys are always required.
Running queries
Selda operations are run in the `SeldaT` monad transformer, which can be layered on top of any `MonadIO`. Throughout this tutorial, we will simply use the Selda monad `SeldaM`, which is just a synonym for `SeldaT IO`. `SeldaT` is entered using a backend-specific `withX` function. For instance, the SQLite backend uses the `withSQLite` function:
```haskell
main :: IO ()
main = withSQLite "my_database.sqlite" $ do
  people <- getAllPeople
  liftIO (print people)

getAllPeople :: SeldaM [Text :*: Int :*: Maybe Text]
getAllPeople = query (select people)

setup :: SeldaM ()
setup = do
  createTable people
  createTable addresses

teardown :: SeldaM ()
teardown = do
  tryDropTable people
  tryDropTable addresses
```
The following example inserts a few rows into a table with an auto-incrementing primary key:
```haskell
people' :: Table (RowID :*: Text :*: Int :*: Maybe Text)
people' = table "people_with_ids"
        $ autoPrimary "id" :*: required "name" :*: required "age" :*: optional "pet"

populate' :: SeldaM ()
populate' = do
  insert_ people'
    [ def :*: "Link"      :*: 125 :*: Just "horse"
    , def :*: "Velvet"    :*: 19  :*: Nothing
    , def :*: "Kobayashi" :*: 23  :*: Just "dragon"
    , def :*: "Miyu"      :*: 10  :*: Nothing
    ]
```
Note the use of the `def` value for the `id` field. This indicates that the default value for the column should be used in lieu of any user-provided value. Auto-incrementing primary keys must always have the type `RowID`.
Updating rows
To update a table, pass the table and two functions to the `update` function. The first is a predicate over table columns. The second is a mapping over table columns, specifying how to update each row. Only rows satisfying the predicate are updated.
```haskell
-- Reconstructed example: add 10 years to everybody's age.
age10Years :: SeldaM ()
age10Years = update_ people
  (\(_name :*: _age :*: _pet) -> true)
  (\(name :*: age :*: pet)    -> name :*: age + 10 :*: pet)
```

Deleting rows

The `deleteFrom` operation takes a table and a predicate, specifying which rows to delete.
The following example deletes all minors from the `people` table:
```haskell
byeMinors :: SeldaM ()
byeMinors = deleteFrom_ people (\(_ :*: age :*: _) -> age .< 18)
```
Selector functions
It's often annoying to explicitly take the tuples returned by queries apart. For this reason, Selda provides a function `selectors` to generate selectors: identifiers which can be used with the `!` operator to access elements of inductive tuples, similar to how record selectors are used to access fields of standard Haskell record types.
Rewriting the previous example using selector functions:
```haskell
name :*: age :*: pet = selectors people

grownups :: Query s (Col s Text)
grownups = do
  p <- select people
  restrict (p ! age .> 20)
  return (p ! name)

printGrownups :: SeldaM ()
printGrownups = do
  names <- query grownups
  liftIO (print names)
```
For added convenience, the `tableWithSelectors` function creates both a table and its selector functions at the same time:
```haskell
posts :: Table (RowID :*: Maybe Text :*: Text)
(posts, postId :*: author :*: content) = tableWithSelectors "posts"
  $ autoPrimary "id" :*: optional "author" :*: required "content"

allAuthors :: Query s (Col s (Maybe Text))
allAuthors = do
  p <- select posts
  return (p ! author)
```
You can also use selectors with the `with` function to update columns in a tuple. `with` takes a tuple and a list of assignments, where each assignment is a selector-value pair. For each assignment, the column indicated by the selector will be set to the corresponding value, on the given tuple.
```haskell
grownupsIn10Years :: Query s (Col s Text)
grownupsIn10Years = do
  p <- select people
  let p' = p `with` [age := p ! age + 10]
  restrict (p' ! age .> 20)
  return (p' ! name)
```
Of course, selectors can be used for updates and deletions as well.
For the remainder of this tutorial, we'll keep matching on the tuples explicitly.
Generic tables and queries
Selda also supports building tables and queries from (almost) arbitrary data types, using the `Database.Selda.Generic` module. Re-implementing the ad hoc `people` and `addresses` tables from before in a more disciplined manner in this way is quite easy:
```haskell
data Person = Person
  { personName :: Text
  , age        :: Int
  , pet        :: Maybe Text
  } deriving Generic

data Address = Address
  { addrName :: Text
  , city     :: Text
  } deriving Generic

people :: GenTable Person
people = genTable "people" [personName :- primaryGen]

addresses :: GenTable Address
addresses = genTable "addresses" [addrName :- primaryGen]
```
This will declare two tables with the same structure as their ad hoc predecessors. Creating the tables is similarly easy:
```haskell
create :: SeldaM ()
create = do
  createTable (gen people)
  createTable (gen addresses)
```
Note the use of the `gen` function here, to extract the underlying table of columns from the generic table.
However, queries over generic tables aren't magic; they still consist of the same collections of columns as queries over non-generic tables.
```haskell
genericGrownups2 :: Query s (Col s Text)
genericGrownups2 = do
  (name :*: age :*: _) <- select (gen people)
  restrict (age .> 20)
  return name
```
Finally, with generics it's also quite easy to re-assemble Haskell objects from the results of a query, using the `fromRel` function.
```haskell
getPeopleOfAge :: Int -> SeldaM [Person]
getPeopleOfAge yrs = do
  ps <- query $ do
    p@(_ :*: age :*: _) <- select (gen people)
    restrict (age .== int yrs)
    return p
  return (map fromRel ps)
```
Today's refactoring doesn't come from any place specifically; it's just something I've picked up over time and find myself using often. Any variations/comments on this approach would be appreciated; I think there are some other good refactorings around these types of problems.
A common code smell that I come across from time to time is using exceptions to control program flow. You may see something to this effect:
```csharp
public class Microwave
{
    private IMicrowaveMotor Motor { get; set; }

    public bool Start(object food)
    {
        bool foodCooked = false;
        try
        {
            Motor.Cook(food);
            foodCooked = true;
        }
        catch (InUseException)
        {
            foodCooked = false;
        }

        return foodCooked;
    }
}
```
Exceptions should only be there to do exactly what they are for: handle exceptional behavior. Most of the time you can replace this type of code with a proper conditional and handle it properly. The "after" example is a form of design by contract, because we are ensuring a specific state of the Motor class before performing the necessary work, instead of letting an exception handle it.
```csharp
public class Microwave
{
    private IMicrowaveMotor Motor { get; set; }

    public bool Start(object food)
    {
        if (Motor.IsInUse)
            return false;

        Motor.Cook(food);

        return true;
    }
}
```
This is part of the 31 Days of Refactoring series. For a full list of Refactorings please see the original introductory post.
Utilities

The module `jug.utils` has a few functions which are meant to be used in writing jugfiles.
Identity

This is simply implemented as:

```python
@TaskGenerator
def identity(x):
    return x
```
This might seem like the most pointless function, but it can be helpful in speeding things up. Consider the following case:
```python
from glob import glob

def load(fname):
    return open(fname).readlines()

@TaskGenerator
def process(inputs, parameter):
    ...

inputs = []
for f in glob('*.data'):
    inputs.extend(load(f))
# inputs is a large list

results = {}
for p in range(1000):
    results[p] = process(inputs, p)
```
How is this processed? Every time `process` is called, a new `jug.Task` is generated. This task has two arguments: `inputs` and an integer. When the hash of the task is computed, both its arguments are analysed. `inputs` is a large list of strings. Therefore, it is going to take a very long time to process all of the hashes.
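A rough, self-contained sketch of the problem (this is not jug's actual hashing code; `hash_args` is a made-up stand-in for how task arguments get fingerprinted):

```python
import hashlib
import pickle

def hash_args(*args):
    # Made-up stand-in: fingerprint a task's arguments by pickling them.
    h = hashlib.sha1()
    for a in args:
        h.update(pickle.dumps(a))
    return h.hexdigest()

inputs = list(range(100_000))

# Without identity(): every task hashes the entire list again.
slow = [hash_args(inputs, p) for p in range(3)]

# With identity(): the list is hashed once, and each downstream task only
# hashes the (tiny) digest of the identity task plus its own parameter.
inputs_digest = hash_args(inputs)
fast = [hash_args(inputs_digest, p) for p in range(3)]

print(len(slow), len(fast))
```

The work saved is the repeated pickling and hashing of the large list, which grows with both the list size and the number of tasks.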
Consider the variation:
```python
from jug.utils import identity

# ... same as above

inputs = identity(inputs)

results = {}
for p in range(1000):
    results[p] = process(inputs, p)
```
Now, the long list is only hashed once! It is transformed into a `Task` (we reuse the name `inputs` to keep things clear) and each `process` call can now compute its hash very fast.
Using `identity` to induce dependencies

`identity` can also be used to introduce dependencies. One can define a helper function:
```python
def value_after(val, token):
    from jug.utils import identity
    return identity([val, token])[0]
```
Now, this function, will always return its first argument, but will only run once its second argument is available. Here is a typical use case:
- The function `process` takes an output file name.
- The function `postprocess` takes as input the output filename of `process`.
Now, you want to run `process` and then `postprocess`, but since communication is done with files, Jug does not see that these functions depend on each other. `value_after` is the solution:
```python
token = process(input, ofile='output.txt')
postprocess(value_after('output.txt', token))
```
This works independently of whatever `process` returns (even if it is `None`).
jug_execute

This is a simple wrapper around `subprocess.call()`. It adds two important pieces of functionality:
1. It checks the exit code and raises an exception if it is not zero (this can be disabled by passing `check_exit=False`).
2. It takes an argument called `run_after`, which is ignored but can be used to declare dependencies between tasks. Thus, it can be used to ensure that a specific process only runs after something else has run:
```python
from jug.utils import jug_execute
from jug import TaskGenerator

@TaskGenerator
def my_computation(input, output_filename):
    ...

token = my_computation(input, 'output.txt')
# We want to run gzip, but **only after** `my_computation` has run:
jug_execute(['gzip', 'output.txt'], run_after=token)
```
jug.utils.timed_path(path)

Returns a Task object that simply returns `path`, with the exception that it uses the path's mtime (modification time) and the file size in the hash. Thus, if the file is touched or changes size, this triggers an invalidation of the results (which propagates to all dependent tasks).
jug.utils.identity(x)

`identity` implements the identity function as a Task (i.e., `value(identity(x)) == x`).

This seems pointless, but if `x` is, for example, a very large list, then using this function might speed up some computations. Consider:

```python
large = list(range(100000))
large = jug.utils.identity(large)
for i in range(100):
    Task(process, large, i)
```

This way the list `large` is going to get hashed just once. Without the call to `jug.utils.identity`, it would get hashed at each loop iteration.
class jug.utils.CustomHash(obj, hash_function)

Set a custom hash function.

This is an advanced feature, and you can shoot yourself in the foot with it. Make sure you know what you are doing. In particular, `hash_function` should be a strong hash: `hash_function(obj0) == hash_function(obj1)` is taken to imply that `obj0 == obj1`.

You can use the helpers in the `jug.hash` module (in particular `hash_one`) to help you. The implementation of `timed_path` is a good example of how to use a `CustomHash`: the path object (a string or bytes) is wrapped with a hashing function which checks the file value.
To start with, I am making an RPG (a roguelike, more or less) and it is all done in the console. I want to remake it using Allegro. I know how to use a tile map of integers, and I think I know how to do simple collision within the array, but I don't like having to build my maps out of integers; it is difficult for me to keep track of. So the two ways I came up with to solve this are as follows:

1. Make the tile map read the characters and create the map that way.
2. Convert the character array into an integer array, and from there I already know how to make the tile map.

Maps look something like this:
Legend: @ player, # wall, M mountain (mine), T tree, W water, + next map, - previous map. (~ stands in for spaces; the forum won't let me put in runs of ' ' and it still doesn't quite come out right, but whatever.)
{
"##########",
"#@~~~~~MM#",
"#~~~TT~~M#",
"#~~~~~~~~#",
"#~~~WW~~~#",
"#-~~~W~~+#",
"##########"
}
Any help with this is greatly appreciated.

Next, I have a very poor inventory system, and it is pretty much a pain to get anything done with it... and yet I already have a bunch of items in the game, all using integers. Can someone point me in the right direction for learning structs for an inventory and banking system? See the attached files for how I have it already. It works the way I have it, but I know there has to be a better way to do this. See line 801 for the inventory and line 3844 for the bank; also see line 2190 for cutting down wood (fishing, mining and cooking are similar in structure).
Finally, I want to know if there is a way to make a native message box where the buttons are custom. For example, ALLEGRO_MESSAGEBOX_YES_NO exists; now what if I wanted to ask the player whether to go left or right, something like ALLEGRO_MESSAGEBOX_LEFT_RIGHT? I know that doesn't work, but I want to know if there is a way of doing something along these lines.
al_show_native_message_box does have a buttons argument which in principle lets you have custom buttons. It doesn't look like that's implemented under Windows though (but some quick searching suggests that this is possible).
You can use <pre></pre> tags for that.
To make a tilemap of integer tiles from a map of char you can simply map your chars to integers :
You have a serious need to learn how to use arrays, maps and text files. Your code would be hundreds of times shorter. Even a simple std::map<std::string, int> would be useful for keeping track of inventory. You can simply use things like map["Item"]++ or map["Axes"]--.
A char in C is also an integer with a small range so if you cleverly lay out your tiles in a tile atlas, no conversion will be needed.
I looked at your source code and yes, you need to learn how to use structs and arrays. There are plenty of tutorial web sites and books on C that can help you get started, but here are a few untested ideas (in C, not C++) you could use and complete:
Edgar, I don't think you can set the bitmaps this way, because it would call al_load_bitmap() too early.

A workaround is to store the file names in the struct array and keep the pointer to the bitmap at NULL initially. After Allegro initialization (and graphic mode selection, I think) you can loop over the array:

```c
for (i = 0; i < sizeof(tileinfo)/sizeof(tileinfo[0]); i++)
    tileinfo[i].tile = al_load_bitmap(tileinfo[i].filename);
```
beoran, I at least sort of know what structs are, but when it comes to enums I am very lost and have no idea how to use them.

One of the biggest things I need help with for structs is: once I have created all the item structures, how do I make the program read every single struct without me having to specifically write them all out the way I currently have it?

Also, for the tiles I would prefer to just have all the tiles in one .png. This is the tile sheet I made a good while back. I started working with Allegro about two months ago but got very frustrated because I bit off more than I knew how to do in C++, let alone Allegro. It was a very rough drawing and I will likely redraw it for this game's specific needs.

If I do something like this, I don't see how to tell which item the character has. If the player has a sword, is there a way of doing something like: get item.id, run through all of item1.id, item2.id and so on, see what matches, and then tell you that if item.id == item1.id you have a sword?
```cpp
#include <iostream>
#include <string>
using namespace std;

struct item
{
    string name;
    int id;
};

int main()
{
    item item1;
    item item2;
    item1.name = "sword";
    item2.name = "ax";
    item1.id = 1;
    item2.id = 2;
    cout << item1.name << " " << item1.id << endl << endl
         << item2.name << " " << item2.id << endl;
    cin.get();
    return 0;
}
```
In C, an enum is just a way to define several integer constants. Look it up, it's worth learning!

Once you use arrays of structs as I suggested, you can start making loops and lookups to do things, instead of hard-coding everything. It's hard to explain briefly; I suggest you learn more about the C language to better understand how this would work.

It's certainly possible to use a single bitmap for the tiles (this is called a "tile atlas"). You need to use either sub-bitmaps or Allegro's more complex bitmap drawing functions that select the part of the single bitmap you want to display.

As for seeing which item a player has, with what I suggested, you would look up the item's id in the item array, and then you know which name to display, which effects it has, etc. What an item is and which items the player has are two different things and should be stored in different variables.
So it would be something more or less like:

the item is 1
the item is "sword"
the player has 1
if "item is" == "item has"
the player has a sword

That way they are separate?

Does anyone know a good book I can buy to help me? Might as well go with something about game programming, since that is what I am working on right now, but even if not, a good book for learning C++ in general would be nice.
Audric has a good point. I'm just used to C++ where you can declare anything you want anywhere.
That would be a better method..
That's the thing: they are a 2D array. map[row][column] gives you the char at column,row (x,y); you just index it by row first. You can make a 2D array of char like this:

```cpp
char map1[][3] = {
    {'#','#','#'},
    {'#','W','#'},
    {'#','+','#'},
    {'#','#','#'}
};
```

But notice that doing it this way you have to enclose everything in single quotes ''. That's because they are char literals (a literal is a value written directly in the source).
so would it be easier the way i have my maps set up just to have individual files for each tile?
You can just change your TILEINFO struct to accept x, y, w, h members to store the location of the tile on your atlas.
- test bool value
- How to connect to a secure sokect from c#/.NET (not using TcpClient or HTTPWebRequest)
- How do I call a function from an <asp:LinkButton> ?
- Perhaps I've got it all wrong (System.InvalidCastException: QueryInterface for interface...failed)
- Shared memory in .net
- VS.NET 2003 Setup: Getting 1317 error
- Cannot Change threading mode
- dotfuscator problem
- empty dataset
- Setting up a project dependancy
- combobox.items question
- visual basic sounds
- How do I call a function from a hyperlink in C# ?
- Installing visual studio.NET 2003 from harddisk
- Recognize Changes to DataSet
- XCopy in .NET
- Any native .NET browser implementations?
- Installation problems
- CollectionBase RemoveAt bug?
- Newsgroup for Updater Application Block?
- CollectionBase bug?
- debug into GAC dll
- project layout
- How to use NUnit in multi-developer environment.
- Windows service config file
- Word Automation - Setting ActivePrinter changes System Default
- Cannot connect to ANY database
- error LNK2019 and fatal error LNK1120
- Unable to connect to ANY database
- SDK for SAML
- Identifying what exceptions a class can through
- Adding one then one Icon to VB Application
- VB.Net verses ASP.Net
- Debugging VB.NET displays ? on Breakpoints
- .Net for Linux - Roomers?
- How to trouble shoot a "Application Exception"?
- ASP.NET (ASPNET) worker process
- Problem installing Windows Service
- Dataset
- Date Arithmetic
- Getting trouble when I am trying to create a ASP.net web application
- Namespace Help! Please!
- tried to post over ten times but to no avail....test
- owc problem
- owc10 and asp.net issues
- Unwanted multiple Tooltips displaying over control.
- ???????
- 'PriceCheck' is not defined
- Custom Performace Counters HELP
- Problem with OWC10, ASP.net and VB.net
- What is best CE.NET Remoting equivalent
- Compiling Problems
- How to get list of DSN's on client machine?
- Assembly References
- Managing the IDE Project List
- Deploying an ASP Application
- Error to export Crystal Report to .PDF format
- OWC10 issues....Option strict disallows late binding
- Threads & Program structure advice
- Application deployment question
- URGENT : GetTypes in assembly??
- Folder Access
- Problem with registering VB DLLs. Please help urgently....
- Small footprint XP app
- Retriving assembly attribute ??
- Unload a Form
- URGENT :system reflexion of assenbly question?
- Is "Whidbey" available to MSDN subscribers?
- Workaround: "Cannot copy assembly..." "Metadata file..."
- API list viewer for C#
- Serialization
- Deploy Additional Files with .NET
- VB interrupt functions.......
- Setting Color of Timedatepicker Control
- .NET Class in ASP 3.0
- Is it possible to create web setup projects with VC#.net standard?
- Is it possible to create web setup projects with VC#.net standard?
- Project management Software
- Length prefixed network message format...easy way to read this?
- Load Testing Client Server Application
- Safe Printing
- Help with asp.net and OWC10
- Variable declaration Query
- Installing a MS CMS demo site using VS/windows installer
- Gate and Relf Opens Mouth: Stock Sinks Again
- Recovery Soars: MSFT Sinks !
- Problem using System.Web.Mail
- Trusting a C++ Managed Application / Assembly.
- Grid/Combo load ..
- Deploying a Windows Forms App that uses Web Services.
- User control
- App VB6.0 in VB.Net
- Carriage Return in WebMethods
- Datagrid col width
- Datagrid col width
- Running .NET assembly from a network share
- Crystal Reports for VS.NET Globalization
- Listview column-header bug?
- Including usercontrols in the EXE-file
- Visual Studio .NET Toolbox disabled in all applications
- creating solutions whereever you want...
- Converting Projects
- Help!! progress bar code
- increasing the stack size
- Active process/window
- Reading data from a Oralce Advanced Queue
- VB.NET or C#
- serial port component
- Help with excel automation and OWC10
- Where i have to put the sub main??
- Userdefined icons on usercontrols
- Change default layout in VB.NET editor
- How to open a mpp stored in sqlserver database
- What version.
- crystal reports book
- using application centre test
- PROCESSOR USAGE CLIMBS TO 100%
- Form Parameter
- ERROR 995
- "Simple ?" beginners web form design question
- how to get the MAC address by .net program if the OS is win98?
- Deployment Issues
- How to create multiple instance of a class ???
- Easy question
- Form starts form = slow execution
- vb.net html form interaction without page reloads on script execution: how?
- ASP.NET Performance
- looking for vb.net library class CHM file
- Multiple File Uploads
- Connecting to SQL Server 6.5 using .net XML WebService
- tooltips delay not working!? - BUG??
- What's the point of Simple Data Binding?
- Tool to Diagram/document out Objects Classes and inheritance?
- Hiding IE browser toolbar with ASP.NET
- DataGrid Row Color
- Components in .Net
- Class Library
- creating a pop-up window in ASP.NET or C#
- Data retrival time and combo load
- VB.NET Conversion Error
- export a .NET executable?
- Tab Control Problem
- Access is denied: 'microsoft.web.ui.webcontrols'
- Print Preview embedded excel sheet in IE
- How-to Underline character (Shortcut) on Tabcontrol?
- Crystal Report.NET and Stored Procedure temp table
- Pluggable security mechanism?
- VB.NET Focus Bug
- Problems with Session variables
- Sending email via Exchange (Not SMTP)
- outlook email from a .net Web Form
- Losing intellisense on typed datasets
- Optional DEVPATH
- How private is the private key??
- Creating Virtual FTP Folder in ASP.NET
- CDO, C# and file-locking problem.
- Printer object replacement syntax
- Limitations of the .NET Microsoft Oracle provider
- Add-ins
- ASP Error
- How to Automate Excel From C#.NET??
- Java to .Net communication
- View Code instead of View Designer on doubleclick in Solution Explorer
- Few components not to be added to project
- "Object reference not set to an instance of an object"
- inline code
- script query
- page's lifecycle
- Transaction via couple of pages
- Excel 9.0 object library and windows 2003 server
- Logging
- Highlight a row in datagrid of a webform
- protected variables or protected accessors?
- beginner question on datagrid column haedings
- disabling tabpages swapping on a tabcontrol
- new features that the .NET Environment
- Writing XML file
- Application pool
- properties window does not appear ??!!
- Localization Problem
- iwhat should i do
- How to secure my DLLs
- GROUPBOX control
- Problem about uninstall a program shortcut
- threading - suspend, resume, abort
- Crash at Application.Run(Form)
- Stored Procedure creation in visualstudio.net :'The Operation could not be created'
- Possible Enterprise Localization Toolkit Bug?
- Best platform to run .net on
- How do I turn off connection pooling in a connection string
- Reply-To Field
- How?
- how to use file association?
- XmlReader Hangs
- Monitoring Shared Directories
- FileSystemWatcher Filter property
- GDI+ Text
- Funky error
- ??? | https://bytes.com/sitemap/f-312-p-204.html | CC-MAIN-2019-43 | refinedweb | 2,581 | 51.65 |
RE: protecting .NET assemblies from hackers
From: Allan (Allan_at_discussions.microsoft.com)
Date: 10/26/04
- ]
Date: Tue, 26 Oct 2004 13:59:20 -0700
of course, but that only exposes problems with .net updater. unless you are
not using that..
"Nate A" wrote:
> I guess I pretty much figured out the solution to my own problem with some
> sifting though the .NET framework help regarding assembly signing. It turns
> out, luckily, that signing an assembly with a key pair generated by sn.exe
> will ensure that if a hacker modifies your .exe assembly, it will not load
> becuase the cryptographic hash will not be correct. I'm sure most of you were
> already aware of that so I apologize for the post ; )
>
> "Nate A" wrote:
>
> > I am at the beginning stages of writing a massive database-connected business
> > management application using the .NET framework and am becoming worried
> > about the security of the application upon completion.
> >
> > I have recently become aware of the ease at which a .NET assembly can be
> > disassembled into its easily readable, underlying CLI code. I can see that it
> > would not be difficult for a malicious user to disassemble, modify, and then
> > recompile in CLI byte code (using the included VS.NET tools). This concerns
> > me deeply since I can see how easy it would be to obtain critical information
> > within the code.
> >
> > I looked into code obfuscation tools such as DotFuscator. As far as I can
> > tell, these tools can only make your code harder to understand by renaming
> > CLI metadata to more or less random names, and optionally encrypting internal
> > strings (such as "salts" to use in encryption/decryption algorithms or
> > passwords used to access remote data, like a database server). Apparently
> > they can also slightly modify the way an algorithm operates to hide the
> > details of the algorithm while maintaining the true functionality of the
> > algorithm. However, algorithm hiding is not my big concern so that is
> > irrelevant.
> >
> > This, however, fails to put my mind at ease since much can be understood
> > about the code after disassembling an obfuscated assembly.
> >
> > For example, if one's application has a class containing methods for
> > encryption and decryption of data using the .NET Framework's "Cryptography"
> > namespace, a hacker needs only to look for classes that "Imports" the
> > Cryptography namespace, or that make calls to members of that namespace in
> > order to realize "hey, I bet this class contains the functions used for this
> > applications encryption." The class may be named "a", with public members
> > "a", "b" and "c" by the obfuscator, but the hacker still knows that members
> > "a", "b" and "c" probably do encryption and decryption.
> >
> > So now let me get to a particular concern of mine dealing with my
> > application and see if anyone has any suggestions.
> >
> > My application connects to a remote database, so let’s say a hacker wants to
> > cop the database password from my app. He knows there must be a database
> > password stored somewhere in the application code, registry, or an external
> > settings file. WHERE it is stored is more or less irrelevant since it won’t
> > be hard to find it either way. I happen to store it in a XML settings file.
> > Of course the password is encrypted in the file, but once the hacker finds
> > the encrypted password string, he knows that at some point in the
> > application, the string will be decrypted when it needs to be sent to the
> > database server to log onto the database.
> >
> > So once he finds the CLI code in the assembly where the encrypted password
> > is fetched from the settings file, pushed onto the stack, and then a call is
> > made to a method in the suspected encryption/decryption class, he has now
> > figured out the method that decrypts the password and can use this to wreak
> > havoc on my app.
> >
> > It seems to me that all the programmer has to do at this point to get the
> > decrypted password is add a little CLI code after the point where the
> > password is decrypted. I don't know much of the specifics of the CLI
> > language, but after inspection of my disassembled code, the hacker could add
> > something like:
> >
> > //push the decrypted string onto the stack
> > ldstr "the decrypted password string returned by the 'secret' decryption
> > function"
> >
> > //call the visual basic "messagebox" method to show him the decrypted string
> > call [Microsoft.VisualBasic]Microsoft.VisualBasic.Interaction::MsgBox(object)
> >
> > Boom, there it is, the database password shown to the hacker in a MsgBox! He
> > now has free reign to log into my database and delete records or replace all
> > credit card numbers with "suck it!" if he wants (or whatever it is these guys
> > like to do BESIDES getting laid!)
> >
> > So the only thing I can see that would almost guarantee that a hacker could
> > not do this would be by not allowing him to modify the code, like having the
> > program detect if it was modified before it was run. I'm not aware of any way
> > to do this that is built into the .NET Framework, but if this exists, maybe
> > someone can let me know.
> >
> > I also considered the possibility of calculating the .exe file's checksum,
> > sending it along with the application in some form or another, and then
> > having the application calculate it's own checksum each time it's run, and
> > check it against the stored value and throw an error if they do not match. (I
> > was hoping that the .NET framework had this kind of security built in, but I
> > haven't come across it yet.) Has anyone ever tried this? Or can anyone think
> > of some pitfalls of this method?
> >
> > So anyway, I hope this post will catch the eye of someone who knows more
> > about these kinds of things than me and maybe they can point me in the right
> > direction on how to secure my code considering these issues mentioned above.
> >
> > Thanks for taking the time to read this.
> > -Nate A
- ] | http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.general/2004-10/2757.html | crawl-002 | refinedweb | 1,003 | 57.91 |
Native Image Generation (NGen)
As we've already discussed, the CLR does not interpret IL. It uses a Just in Time (JIT) compiler to compile IL to native code at runtime, inserting hooks into the various CLR services in the process. This process was discussed in Chapter 2. But clearly compiling managed code into native code at runtime has some costs associated with it. For certain classes of applications, startup time is crucial. Client applications are great examples of this, because there's a user sitting at the computer waiting for the application to become responsive. The more time spent jitting code up front, the more time the user must wait. For situations in which this is a problem, the CLR offers ahead-of-time JIT compilation using a technology called NGen.
The ngen.exe utility, located in your Framework directory (i.e., \WINDOWS\Microsoft.NET\Framework\v2.0.50727\), enables you to perform this ahead-of-time compilation on the client machine. The result of this operation is stored in a central location on the machine called the Native Image Cache. The loader knows to look here when loading an assembly or DLL that has a strong name. All of the .NET Framework's assemblies are NGen'd during install of the Framework itself.
In version 2.0 of the Framework, a new NGen Windows Service has been added to take care of queuing and managing NGen compilations in the background. This means your program can install, add a request to the NGen queue, and then exit installation. The NGen service will then take care of compilation asynchronously. This can reduce quite noticeably the install time for your program. Right after an install of a new component, however, there's a window of time where the NGen image might not be ready yet, and thus could be subject to jitting.
NGen uses the same code generation techniques that the CLR's JIT uses to generate native code. As discussed briefly in the section on the JIT in Chapter 2, the code that is generated is designed to take advantage of the underlying computer architecture. Subtle differences in chip capabilities will make an NGen image unusable across machines. Thus, image generation must occur on the client machine as part of install (or postinstall) rather than being done in a lab before packaging and shipping your program. Thankfully, the CLR notices this at load time and will fall back to runtime JIT if necessary.
If you determine NGen is right for you — performance testing should determine this choice — you'll want to run it against your application's EXE. Doing this will cause NGen to traverse your application's dependencies, generate code for each, and store the image in the Native Image Cache alongside your program's. If any of your program's dependencies is missing a native image, the CLR loader won't be able to load your image and will end up jitting instead.
Managing the Cache (ngen.exe)
The ngen.exe tool has quite a few switches to control behavior. We'll briefly look at the most common activities you'll want to perform. Running ngen.exe /? at the command prompt will show detailed usage information for the tool. The Windows Service that takes care of managing and executing queued activities is called ".NET Runtime Optimization Service v2.0.50727_<Processor>," and can be found in your computer's Administrative Tools\Services menu.
Here is a brief summary of operating NGen:
Install: Running ngen install foo.dll will JIT compile and install the images for foo.dll and its dependencies into the Native Image Cache. If dependencies already exist in the cache, those will be reused instead of regenerating them. You can specify /queue:n at the end of the command, where n is 1, 2, or 3 (e.g., ngen install foo.dll /queue:2). This takes advantage of the Windows Service to queue the activity for background execution instead of executing it immediately. The scheduler will execute tasks in priority order, where 1 is the highest-priority task, and 3 is the lowest-priority task.
Uninstall: To completely remove the image from the Native Image Cache for (say) foo.dll, you can run ngen uninstall foo.dll.
Display: Typing ngen display foo.dll will show you the image status for foo.dll, such as whether it's available or enqueued for generation. Executing ngen display by itself will show a listing of the entire Native Image Cache's contents.
Update: Executing ngen update will update any native images that have been invalidated due to a change in an assembly or one of its dependencies. Specifying /queue at the end, for example ngen update /queue, schedules the activity rather than performing it synchronously.
Controlling background execution: Running ngen queue [pause|continue|status] enables you to manage the queue from the command line by pausing, continuing, or simply enquiring about its status.
Manually executed queued items: You can synchronously perform some or all of the queued work items by invoking ngen executeQueuedItems and optionally passing a priority of either 1, 2, or 3. If a priority is supplied, any lesser-priority items are not executed. Otherwise, all items are executed sequentially.
For detailed usage information, please consult the Microsoft .NET Framework SDK.
Base Addresses and Fix-Ups
A process on Windows has a large contiguous address space which, on 32-bit systems, simply means a range of numbers from (0x00000000 through 0xffffffff, assuming /3GB is off). All images get loaded and laid out at a specific address within this address space. Images contain references to memory addresses in order to interoperate with other parts of the image, for example making function calls (e.g., call 0x71cb0000), loading data (e.g., mov ecx,0x71cb00aa), and so on. Such references are emitted as absolute addresses to eliminate the need for address arithmetic at runtime — for example, calculating addresses using offsets relative to a base address — making operations very fast. Furthermore, this practice enables physical page sharing across processes, reducing overall system memory pressure.
To do this, images must request that the loader place them at a specific address in the address space each time they get loaded. They can then make the assumption that this request was granted, burning absolute addresses that are calculated at compile time based on this address. This is called an image's base address. Images that get to load at their preferred base address enjoy the benefits of absolute addressing and code sharing listed above.
Most developers never think about base addresses seriously. The .NET Framework team certainly does. And any team developing robust, large-scale libraries who wants to achieve the best possible startup time should do the same. Consider what happens if you don't specify the base address at all. Another assembly that didn't have a base address might get loaded first. And then your assembly will try to load at the same address, fail, and then have to fix-up and relocate any absolute memory addresses based on the actual load address. This is all done at startup time and is called rebasing.
The base address for an image is embedded in the PE file as part of its header. You can specify a preferred base address with the C# compiler using the /baseaddress:<xxx> switch. Each compiler offers its own switch to emit this information in the resulting PE file. For example, ilasm.exe permits you to embed an .imagebase directive in the textual IL to indicate a base address.
Clearly, two assemblies can still ask for the same base address. And if this occurs, your assembly will still have to pay the price for rebasing at startup. Large companies typically use static analysis to identify overlaps between addresses and intelligently level the base addresses to avoid rebasing. The Platform SDK ships with a tool called ReBase.exe that enables you to inspect and modify base addresses for a group of DLLs to be loaded in the same process.
Hard Binding
Even in the case of ahead-of-time generated native images, some indirection and back-patching is still necessary. All accesses to dependent code and data structures in other assemblies still goes indirectly through the CLR, which looks up the actual virtual addresses and back-patches the references. This is done through very small, hand-tuned stubs of CLR code, but nonetheless adds an extra indirection for the first accesses. A consequence of this is that the CLR must mark pages as writable in order to perform the back-patching, which ends up reducing the amount of sharing and increasing the private pages in your application. We've already discussed why this is bad (above).
NGen 2.0 offers a feature called hard binding to eliminate this cost. You should only consider hard binding if you've encountered cases where this is a problem based on your targets and measurements. For example, if you've debugged your private page footprint and determined that this is the cause, only then should you turn on hard binding. Turning it on can actually harm the performance of your application, because it bundles more native code together so that absolute virtual addresses can be used instead of stubs. The result is that more code needs to be loaded at startup time. And base addresses with hard-bound code must be chosen carefully; with more code, rebasing is substantially costlier.
To turn on hard binding, you can hint to NGen that you'd like to use it via the DependencyAttribute and DefaultDependencyAttribute, both located in the System.Runtime.CompilerServices namespace. DependencyAttribute is used to specify that an assembly specifically depends on another. For example, if your assembly Foo.dll depends on Bar.dll and Baz.dll, you can mark this using the assembly-wide DependencyAttribute attribute:
using System.Runtime.CompilerServices; [assembly: Dependency("Bar", LoadHint.Always)] [assembly: Dependency("Baz", LoadHint.Sometimes)] class Foo { /*...*/ }
Alternatively, you may use DefaultDependencyAttribute to specify the default NGen policy for assemblies that depend on the assembly annotated with this attribute. For example, if you have a shared assembly which will be used heavily from all of your applications, you might want to use it:
using System.Runtime.CompilerServices; [assembly: DefaultDependency(LoadHint.Always)] class Baz { /*...*/ }
The LoadHint specifies how frequently the dependency will be loaded from calling assembly. Today, NGen does not turn on hard binding except for assemblies marked LoadHint.Always. In the above example, this means Foo.dll will be hard bound to Bar.dll (because the association is marked as Always). Although Baz.dll has a default of Always (which means assemblies will ordinarily be hard-bound to it), Foo.dll overrides this with Sometimes, meaning that it will not be hard bound.
String Freezing
Normally, NGen images will create strings on the GC heap using the assembly string table, as is the case with ordinary assemblies. String freezing, however, results in a special string GC segment that contains all of your assembly's strings. These can then be referenced directly by the resulting image, requiring fewer fix-ups and back-patching at load time. As we've seen above, fewer fix-ups and back-patching marks less pages as writable and thus leads to a smaller number of private pages in your working set.
To apply string freezing, you must mark your assembly with the System.Runtime.CompilerServices .StringFreezingAttribute. It requires no arguments. Note: string freezing is an NGen feature only; applying this attribute to an assembly that gets jitted has no effect.
using System; using System.Runtime.CompilerServices; [assembly: StringFreezing] class Program { /*... */ }
One downside to turning string freezing on is that an assembly participating in freezing cannot be unloaded from a process. Thus, you should only turn this on for assemblies that are to be loaded and unloaded transiently throughout a program's execution. We discussed domain neutrality and assembly unloading earlier in this chapter, where similar considerations were contemplated.
Benefits and Disadvantages
NGen has the clear advantage that the CLR can execute code directly without requiring a JIT stub to first load and call into mscorjit.dll to generate the code. This can have substantial performance benefits for your application. The time savings for the CLR to actually load your program from scratch is usually not dramatic — that is, the cold boot time — because there is still validation and data structure preparation performed by the runtime. But because the native images are loaded into memory more efficiently (assuming no fix-ups) and because code sharing is increased, warm boot time and working set can be substantially improved.
Furthermore, for short running programs, the cost of runtime JIT compilation can actually dominate the program's execution cost. In the very least, it may give the appearance of a sluggish startup (e.g., the time between a user clicking a shortcut to the point at which the WinForms UI shows up). In such cases, NGen can improve the user experience quite dramatically. For longer-running programs — such as ASP.NET web sites, for example — the cost of the JIT is often minimal compared to other startup and application logic. The added management and code unloading complexity associated with using NGen for ASP.NET scenarios means that you should seldom ever try to use the two in combination.
On the other hand, there are certainly some disadvantages to using NGen, not the least of which is the added complexity to your installation process. Worst of all, running ngen.exe across an entire assembly and its dependencies is certainly not a quick operation. When you install the .NET Framework redistributable package, you'll probably notice a large portion of the time is spent "Generating native images." That's NGen working its magic. In 2.0, this is substantially improved as a result of the new Windows Service that performs compilation in the background.
To actually invoke ngen.exe for manual or scheduled JIT compilation also unfortunately requires Administrator access on the client's machine. This can be an adoption blocker in its own right. You can certainly detect this in your install script and notify the user that, for optimized execution time, they should run a utility as Administrator to schedule the NGen activity. Images generated by Administrator accounts can still be used by other user accounts.
NGen images can also get invalidated quite easily. Because NGen makes a lot of optimizations that create cross-assembly interdependencies — for example, cross-assembly inlining and especially in the case of hard binding — once a dependency changes, the NGen image will become invalid. This means that the CLR will notice this inconsistency and resort back to a JIT-at-runtime means of execution. In 2.0, invalidations occur less frequently — the infrastructure has been optimized to prevent them to as great an extent as possible — and the new NGen Windows service may be used to schedule re-NGen activities in the background whenever an image is invalidated.
-
- Comment | http://codeidol.com/csharp/net-framework/Assemblies,-Loading,-and-Deployment/Native-Image-Generation-(NGen)/ | CC-MAIN-2015-48 | refinedweb | 2,497 | 56.35 |
Okay, help me understand whats going on here.
Hope I've made my confusion(s) clear. I've got a hunch I'm looking at something in a very wrong way. Thanks in advance!

Code:
#include <stdio.h>

int foo (int q)
{
    /* so here we're declaring a function called foo which returns an integer.
       (int q) means that our foo function will need a variable and we're telling it
       that variable is q. could also have been written as just (q) if q was defined
       earlier (i think?). Heres my first question - whats the value of q? does it
       just default to 0 since we never define it as any specific value? */
    int y = 1;
    printf ("q=%i\n", q);
    return (q + y);
    /* so this means that our function foo will return the value (q+y) which is at
       this point (0+1) so the value of foo is 1. how would I get it to print the
       value of foo? printf ("foo=%i", foo) gave me a big ol long number and I'm
       assuming thats because theres a different syntax for printf-ing the value of
       a function? */
}

int main ()
{
    int x = 0;
    while (x < 3)
    {
        printf ("x=%i\n", x);
        x = x + foo(x);
        /* here's where I get really lost. so this is executed before foo, why is
           that? is main just the first function executed no matter what? Now, at
           the end of this, x is changed to be 0+foo(x). So is foo(x) the value foo
           returns if its executed with x as the variable instead of q? In that case
           it would be the same thing since q and x were both 0, so foo(x) is 1, and
           x is now 0+1=1. now it loops back to the top and prints q=0. How does
           that work? why does it go back to the top of the code instead of
           continuing the while loop? Then goes back down to main, sees that x is
           still less than 3, so prints x=1. now back up to foo(?) and prints q=1.
           why did it go back up to foo again and how did q become 1?
           Now back down to main and apparently sees that x is no longer less than 3
           because it stops and I am THOROUGHLY confused. */
    }
}
Please see my previous blog here to get up to speed. You should have serial access to the micropython prompt at this point. Connect to your board and run the following code in the REPL to set up our string of lights. For this example, I'm using a 1 meter long string with 144 lights.
from machine import Pin
from neopixel import NeoPixel

lights = NeoPixel(Pin(5), 144)
Now we have an instance of NeoPixel we can manipulate. Run the following to make sure your lights work. Warning: if you have too many lights connected, this is where you will start to have problems. Turning all the lights on at full brightness will draw the most current. Be careful.
lights[0] = (50, 50, 50)
lights.write()
This turns the first light on and you should see a white light. The syntax is the same for the rest of the lights.
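The earlier warning about current draw can be made concrete with a rough estimate. The ~20 mA per colour channel at full value is a common rule-of-thumb for WS2812-style pixels — an assumption on my part, not a figure from this post:

```python
def estimate_current_ma(colors, ma_per_channel=20):
    """Rough worst-case current estimate: each of R, G, B draws up to
    ~20 mA at value 255 on a typical WS2812-style pixel."""
    return sum((r + g + b) / 255 * ma_per_channel for (r, g, b) in colors)

# 144 pixels at full white is roughly 8.6 A -- far too much for USB power:
print(estimate_current_ma([(255, 255, 255)] * 144))  # 8640.0
```

This is why the examples below keep the channel values low.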
lights[index of the light you want] = (red intensity, green intensity, blue intensity)
Now, let's light all the lights in the string. The max value for each color channel is 255, and higher values give a brighter light (and draw more current). Numbers less than 50 will give good results most of the time.
for i in range(144):
    lights[i] = (20, 20, 20)  # not too bright
    lights.write()
This will draw each light as the loop progresses and you can watch the string of lights fill in, or you can wait until the loop is done and then call lights.write() once to draw the string all at once. The latter is much faster.
Time to get some random color involved. Try this.
from random import getrandbits

for i in range(144):
    # numbers higher than 4 lead to lots of white lights
    lights[i] = (getrandbits(3), getrandbits(3), getrandbits(3))
    lights.write()
Similar to before, but every light should be a random color now.
Sticking with the random color theme this will assign a random color to the string and update the entire row 10 times a second. Simple and easy effect.
from utime import sleep

try:
    while 1:
        for i in range(144):
            lights[i] = (getrandbits(3), getrandbits(3), getrandbits(3))
        lights.write()
        sleep(.1)
except KeyboardInterrupt:
    print('Done')
You can also move the lights.write() command into the loop for a slightly different effect.
try:
    while 1:
        for i in range(144):
            lights[i] = (getrandbits(3), getrandbits(3), getrandbits(3))
            lights.write()
        sleep(.1)
except KeyboardInterrupt:
    print('Done')
This will sweep new colors down the string for an interesting effect. Moving the sleep(.1) call around and changing its duration will have interesting effects as well.
What to expect in part 3
In the final chapter, we are going to make a simple web application to control the lights from anywhere. The WiFi stack on the ESP8266 will make this pretty easy. We will also be making custom firmware with the environment we set up in the first chapter. Once it's complete, the ESP8266 will allow control of the lights over WiFi. Stay Tuned.
API viscosity measures the resistance to change of code written using a particular API. For each goal that the user might want to accomplish with the API, describe how easy it is to make changes to the code required to accomplish that goal. In particular, consider situations where the user has a choice between two or more alternatives for accomplishing a particular goal. Describe how easy it is to change code that uses one approach to using another.
It’s important to describe these changes from the developer’s perspective. Think of the changes that developers might expect to make in terms of the different scenarios that the API supports, rather than the support that the API provides for modification.
You should also think of the 'domino' effect of any change. Some changes might require the user to make additional changes elsewhere in their code that uses your API, which in turn require other changes, etc etc...
In the System.IO namespace, one goal might be to write code that appends text to a text file. There are at least two different approaches that might be taken to accomplish this goal. The first approach might be to use the StreamWriter class exclusively:
StreamWriter sw = new StreamWriter("C:\\test.txt", true);
sw.WriteLine("Text to append to test.txt");
sw.Close();
Another approach might be to use the File class to create the instance of the StreamWriter instead of calling the StreamWriter constructor. The changes required are simply to change the first line of code above to:
StreamWriter sw = File.AppendText("C:\\test.txt");
The remaining lines of code in the sample would be the same, since the only thing that has changed is the way that the instance of the StreamWriter is created.
It’s important to consider other changes that users might reasonably be expected to make. For example, if the user wants to write to a different file they only need to change the path to the file that is passed in as a string whenever the StreamWriter is created. On the other hand, if a user uses the File class to create the StreamWriter then decides that they want to modify the encoding that the output is written, they will need to modify the creation of the StreamWriter to use the StreamWriter constructor and pass the encoding they wish to use in the constructor.
API Viscosity can be defined in the following terms. | http://blogs.msdn.com/b/stevencl/archive/2004/03/10/87652.aspx | CC-MAIN-2015-06 | refinedweb | 406 | 69.52 |
Environment: Any modern C++ compiler
Purpose
The CEditDist class performs edit distance calculations on abstract data types. The edit distance is defined as the minimum cost required to convert one string into another, where the conversion can include changing one character to another, deleting characters and inserting characters, with user-defined costs for each basic operation. The algorithm is O(nm) where n and m are the lengths of the two strings compared (this is the fastest known algorithm). Edit distance calculations are useful for finding the degree of similarity between strings, e.g. in non-exact database queries.
The package is implemented as an abstract template base class. The programmer derives from this base class in order to define the data types used and the cost functions. The package includes complete documentation and an example of a simple implementation.
Example
Overview:
This example will create a class that can be used to calculate the edit distance between two arrays of integers, according to the following cost function: Deleting or inserting an integer will have a cost of 5; Changing from one integer value to a different value will have a cost of 3.
Step 1: Instantiate the Base Class.
The instantiation must be explicitly performed because of the special treatment of abstract template classes in C++. In our case, we want a base class of integers, so we will use the following statement:
template class CEditDist<int>;
Step 2: Derive from CEditDist.
We must now create a derivation from the base class. This derivation will define the cost functions, which are virtual functions called from the base class during the edit distance calculation.
class CIntEditDist : public CEditDist<int>
{
public:
    int DeleteCost(const int& deleted, int x, int y)
        { return 5; }
    int InsertCost(const int& inserted, int x, int y)
        { return 5; }
    int ChangeCost(const int& from, const int& to, int x, int y)
        { return (from == to ? 0 : 3); }
};
Step 3: Using the Class.
To use the class, construct an object from the base class and call its EditDistance function, like so:
CIntEditDist ed;
int a[7] = {1,1,2,3,1,2,2};
int b[6] = {1,2,2,3,1,1};
int c = ed.EditDistance(a, 7, b, 6);
In the above example, the edit distance stored in variable c should be 11. This is because the least expensive edit replaces the second character in a from 1 to 2, replaces the sixth character from 2 to 1, and deletes the last character in a, for a total of two changes and one deletion, with a cost of 3+3+5=11.
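As a sanity check, the same minimum cost can be computed with the standard O(nm) dynamic program. This sketch uses the example's costs (insert/delete 5, change 3) and is independent of the CEditDist package:

```python
def edit_distance(a, b, ins=5, dele=5, chg=3):
    """O(len(a)*len(b)) dynamic program; d[i][j] is the cheapest
    way to turn a[:i] into b[:j]."""
    n, m = len(a), len(b)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * dele          # delete everything in a[:i]
    for j in range(1, m + 1):
        d[0][j] = j * ins           # insert everything in b[:j]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            change = 0 if a[i - 1] == b[j - 1] else chg
            d[i][j] = min(d[i - 1][j] + dele,       # delete a[i-1]
                          d[i][j - 1] + ins,        # insert b[j-1]
                          d[i - 1][j - 1] + change) # change a[i-1] -> b[j-1]
    return d[n][m]

print(edit_distance([1, 1, 2, 3, 1, 2, 2], [1, 2, 2, 3, 1, 1]))  # 11
```

The result agrees with the hand-derived answer: two changes and one deletion, 3+3+5=11.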
Downloads

Download source - 2 Kb
EditDist classes and documentation (external link) - 27Kb | http://mobile.codeguru.com/cpp/cpp/cpp_mfc/article.php/c791/CEditDist-Abstract-Template-Class-for-Edit-Distance-Calculation-on-Generic-Data-Types.htm | CC-MAIN-2017-43 | refinedweb | 486 | 51.48 |
Now that the client machines have been initialized, you can change any of them to NIS+ servers of the following types:
To be root replicas--to contain copies of the NIS+ tables that reside on the root master server
To be master servers of subdomains of the root domain
To be replicas of master servers of subdomains of the root domain
You can have only one NIS+ master root server. Root NIS+ servers are a special type of NIS+ server. This section does not describe how to configure a root master server; see "Setting Up NIS+ Root Servers" for more information.
You can configure servers any of these different ways:
Without NIS compatibility
With NIS compatibility
With NIS compatibility and DNS forwarding--you only need to set DNS forwarding if you are going to have SunOS 4.x clients in your NIS+ namespace (see NIS+ Transition Guide for more information on using NIS-compatibility mode)
Servers and their replicas should have the same NIS-compatibility settings. If they do not have the same settings, a client that needs NIS compatibility set to receive network information may not be able to receive it if either the server or replica it needs is unavailable.
This example shows the machine client1 being changed to a server. This procedure uses the NIS+ rpc.nisd command instead of an NIS+ script.
Before you can run rpc.nisd:
The domain must have already been configured and its master server must be running.
The master server of the domain's tables must be populated. (At a minimum, the hosts table must have an entry for the new client machine.)
You must have initialized the client machine in the domain.
You must be logged in as root on the client machine. In this example, the client machine is named client1.
Optionally, if using DES authentication, the client machine must use the same Diffie-Hellman key configuration as that used on the master server.
You need the superuser password of the client that you will convert into a server.
Perform any of the following alternate procedures to configure a client as a server. These procedures create a directory with the same name as the server and create the server's initialization files which are placed in /var/nis.
All servers in the same domain must have the same NIS-compatibility setting. For example, if the master server is NIS compatible, then its replicas should also be NIS compatible.
To configure a server without NIS compatibility, enter the following command:

# rpc.nisd
Edit the /etc/init.d/rpc file on the server to uncomment the whole line containing the string EMULYP="-Y".
To do this, remove the # character from the beginning of the line.
Type the following as superuser.

# rpc.nisd -Y
This procedure configures a NIS+ server with both DNS forwarding and NIS+ compatibility. Both of these features are needed to support SunOS 4.x clients.
Edit the /etc/init.d/rpc file on the server to uncomment the whole line containing the string EMULYP="-Y".
To do this, remove the # character from the beginning of the line.
Add -B to the above line inside the quotes.
The line should read:

EMULYP="-Y -B"
Type the following command as superuser.

# rpc.nisd -Y -B
Now this server is ready to be designated a master or replica of a domain.
Repeat the preceding client-to-server conversion procedure on as many client machines as you like.
The sample NIS+ domain described in this chapter assumes that you will convert three clients to servers. You will then configure one of the servers as a root replica, another as a master of a new subdomain, and the third as a replica of the master of the new subdomain. | http://docs.oracle.com/cd/E19455-01/806-1386/6jam5ahlb/index.html | CC-MAIN-2015-06 | refinedweb | 615 | 63.7 |
how to XOR
On 19/04/2014 at 07:09, xxxxxxxx wrote:
I can XOR a vector
pxvec = pxvec ^ vecmask
how do I XOR a single axis
pxvec.x = pxvec.x ???? vecmask.x
tia
On 19/04/2014 at 09:42, xxxxxxxx wrote:
pxvec.x = pxvec.x * vecmask.x
Vector XOR just multiplies componentwise.
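For anyone without Cinema 4D at hand, the componentwise behaviour can be sketched in plain Python (a toy helper, not part of the c4d API):

```python
def componentwise_mul(a, b):
    """Multiply two 3-component vectors element by element,
    which is what c4d's Vector ^ operator does."""
    return (a[0] * b[0], a[1] * b[1], a[2] * b[2])

print(componentwise_mul((1, 2, 3), (2, 3, 4)))  # (2, 6, 12)
```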
-Niklas
On 22/04/2014 at 04:35, xxxxxxxx wrote:
thanks
I couldn't make sense of the SDK syntax '__rxor__(self,other)'
or the example
I thought % was a modulo operator?
Vector.__rxor__(self, other)

Multiplies two vectors together componentwise and set the left hand vector to the result:

import c4d
v_result = c4d.Vector(1,2,3) % c4d.Vector(2,3,4)
# v_result => Vector(2,6,12)

Return type: Vector
On 22/04/2014 at 09:09, xxxxxxxx wrote:
Well, that's a typo in the docs. The modulo operator performs the cross product.
Thanks for the notice.
-Niklas
On 23/04/2014 at 11:43, xxxxxxxx wrote:
So how do you form this with the example above pls?
can't figure the syntax
__rxor__(self,other)
xvec.x = pcvec.x__rxor__(vecmask.x)
tia
On 24/04/2014 at 03:15, xxxxxxxx wrote:
I don't get what you are trying to do. Vector.__rxor__() is only called in some special cases, see
If you want to multiply two vectors componentwise, use the ^ operator. The * operator on Vectors
performs the dot-product. If you want to multiply one component of a vector with another, use
the * operator. You can't multiply a vector component componentwise as there is only one component
which is the value itself.
-Niklas | https://plugincafe.maxon.net/topic/7827/10112_how-to-xor | CC-MAIN-2019-13 | refinedweb | 274 | 65.22 |
This page uses content from Wikipedia and is licensed under CC BY-SA.
The user access level of editors affects their abilities to perform certain actions on Wikipedia; it depends on which rights (also called permissions, user groups, bits or flags) are assigned to accounts. This is determined by whether the editor is logged into an account, and whether the account has a sufficient age and number of edits for certain automatic rights, and what additional rights have been assigned manually to the account.
Everyone is able to read Wikipedia. Unless they are blocked, they may freely edit most pages without the need to be logged in. Being logged in gives users a number of advantages, such as having their public IP address hidden and the ability to track one's contributions. Additionally, once user accounts are more than a certain number of days old and have made more than a certain number of edits, they automatically become autoconfirmed, allowing the direct creation of articles, the ability to move pages, to edit semi-protected pages, and to upload files. Further access levels need to be assigned manually by a user with the appropriate authority. An editor with more experience and good standing can attempt to become an administrator (sysop), which provides a large number of advanced permissions. A number of other flags for specialized tasks are also available.
All visitors to the site, including unregistered users, are part of the '*' group, and all logged-in registered users are also part of the 'user' group. Users are automatically promoted into the autoconfirmed/confirmed users pseudo-group of established users when their account is more than four days old and has ten edits, and the 'extended confirmed' user group later on. have one or more rights assigned to them; for example the ipblock-exempt (IP block exemptions) group have the 'ipblock-exempt' and 'torunblocked' rights. All members of a particular user group will have access to these rights. The individual rights that are assigned to user groups are listed at Special:ListGroupRights. Terms like rights, permissions, bits and flags can refer to both user groups and the individual rights assigned to them.
Permissions requested at Requests for permissions only have local rights on the English Wikipedia wiki. But members of global user groups have rights across all Wikimedia Foundation wikis, although that access can sometimes be restricted by local wiki policies. Users registered at Wikimedia wikis also have registered user rights to other Wikimedia wikis if their account is a SUL or unified login account. For SUL accounts, both local and global user group membership across Wikimedia wikis can be viewed at Special:CentralAuth.
The system-generated technical permissions are listed at Special:ListGroupRights.
Contributors who are not logged in are identified by their IP address rather than a user name, whether or not they have already registered an account. They may read all Wikipedia pages (except restricted special pages), and edit pages that are not protected (including Pending changes protected/move-protected articles). They may create talk pages in any talk namespace but need to ask for help to create pages in some parts of the wiki. They cannot upload files or images. They must answer a CAPTCHA if they wish to make an edit which involves the addition of external links, and click a confirm link to purge pages. All users may also query the site API in 500-record batches.
Edit screens of unregistered users are headed by a banner that reads:
Registered users may immediately e-mail other users if they activate an email address in their user preferences. All logged-in users may mark edits as minor. They may purge pages without a confirmation step, but are still required to answer a CAPTCHA when adding external links. They may save books to their userspace but not the Books namespace. They may also customize their Wikimedia interface and its options as they wish, via Special:Preferences or by adding personal CSS or JavaScript rules to their vector.css or vector.js files.
Several actions on the English Wikipedia are restricted to user accounts which were created a certain number of days. Although the precise requirements for autoconfirmed status vary according to circumstances, most English Wikipedia user accounts that are more than four days old and have made at least 10 edits (including deleted ones) are considered autoconfirmed. However, users with IPBE editing through the Tor network are subjected to much stricter autoconfirmed thresholds: 90 days and 100 edits.
Autoconfirmed or confirmed users can create articles, move pages, edit semi-protected pages, and upload files or upload a new version of an existing file. Autoconfirmed users are no longer required to enter a CAPTCHA for most events and they may save books to the Books namespace. In addition, the Edit filter has a number of warning settings that only affect editors who are not autoconfirmed.
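As a toy illustration (not actual MediaWiki code), the thresholds described above can be expressed as a simple predicate:

```python
def is_autoconfirmed(age_days, edit_count, tor_with_ipbe=False):
    """Toy model of the thresholds described above: more than 4 days
    and at least 10 edits normally, 90 days and 100 edits when editing
    through Tor with an IP block exemption."""
    if tor_with_ipbe:
        return age_days >= 90 and edit_count >= 100
    return age_days > 4 and edit_count >= 10

print(is_autoconfirmed(5, 10))                     # True
print(is_autoconfirmed(5, 10, tor_with_ipbe=True)) # False
```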
In some situations, it is necessary for accounts to be exempted from the customary confirmation period. The 'confirmed' group contains the same rights as the 'autoconfirmed' pseudo-group, but can be granted by administrators.

See Special:ListUsers/extendedconfirmed for a list of the 40897 extended confirmed users.
Administrators are volunteer editors who are granted the rights by the community at Requests for Adminship (RfA). The RfA process involves considerable discussion and examination of the candidate's activities as an editor. Users who are members of this user group have access to a number of tools to allow them to carry out certain functions on the wiki. The tools cover processes such as page deletion, page protection, blocking and unblocking, and access to modify fully protected pages and the Mediawiki interface. Administrators also have the ability to grant and remove account creator and several other user rights.

See Special:ListUsers/sysop for a list of administrators, also known as "sysops" (system operators); the two terms are used interchangeably.

See Special:ListUsers/bureaucrat for a list of the 21 bureaucrats.
Members of this group can review other users' edits to articles placed under pending changes protection. This right is automatically assigned to administrators. Prior to September 2014, this right was known as "reviewer".
See Special:ListUsers/reviewer for a list of the 7003 reviewers.
Users who are given the rollback flag ('rollbacker' user group) may revert consecutive revisions of an editor using the rollback feature. This right is automatically assigned to administrators.
See Special:ListUsers/rollbacker for a list of the 5958 rollbackers.
Members of this group have 'autopatrol', which allows them to have their pages automatically patrolled on the New Pages list. This right is automatically assigned to administrators. Prior to June 2010, this right was known as "autoreviewer".
See Special:ListUsers/autoreviewer for a list of the 3867 autopatrolled users.
Members of this group have 'patrol', which allows them to mark pages created by others as patrolled or reviewed. This right is automatically assigned to administrators.
See Special:ListUsers/patroller for a list of the 645 new page reviewers.
The file mover user right is intended to allow users experienced in working with files to rename them, subject to policy, with the ease that autoconfirmed users already enjoy when renaming Wikipedia articles. This right is automatically assigned to administrators.
See Special:ListUsers/filemover for a list of the 403 additional filemovers.
The page mover user right ('extendedmover' user group) is intended to allow users who have demonstrated a good understanding of the Wikipedia page naming system to rename pages and subpages without leaving redirects, subject to policy. This right is automatically assigned to administrators.
See Special:ListUsers/extendedmover for a list of the 225 page movers.
Users who are given the accountcreator flag ('accountcreator' user group) are not affected by the 6 account creation limit per day per IP, and can create accounts for other users without restriction. Users in this group can also override the anti-spoof checks on account creation. This right is automatically assigned to administrators and bureaucrats.[6] Additionally, account creators are able to create accounts with names that are otherwise blocked by the title blacklist.
See Special:ListUsers/accountcreator for a list of the 35 additional account creators.

See Special:ListUsers/eventcoordinator for a list of the 119 event coordinators.

See Special:ListUsers/templateeditor for a list of the 167 template editors.

See Special:ListUsers/ipblock-exempt for a list of the 341 affected users.
Members of the edit filter manager group can create, modify, enable, disable, and delete edit filters as well as view private filters and their associated logs. This right is not assigned to administrators by default but they are allowed to grant the user right to themselves.
See Special:ListUsers/abusefilter for a list of the 157 edit filter managers. All users can check their log entries on the Special:AbuseFilter pages.
Members of the edit filter helper group can view private edit filters and their associated logs. This access is also included in the administrator groups.
See Special:ListUsers/abusefilter-helper for a list of the 12 edit filter helpers. All users can check their log entries on the Special:AbuseFilter pages.
Users who are given the oversight flag ('oversight' user group) have access to additional functions on the deletion and block screens through which they can hide revisions of pages from all other users, and Special:Log/suppress, where they can view a log of such actions and the content of the hidden revisions. This right is only granted to exceedingly few users who are at least 18 years old and have signed the Wikimedia Foundation's confidentiality agreement for nonpublic information. Oversighters are also required to have passed an "RfA or RfA-identical process".[9]
See Special:ListUsers/oversight for a list of the 49 Oversighters.

Users who are given the checkuser flag ('checkuser' user group) can examine the IP addresses and user agent data associated with registered accounts. This right is only granted to exceedingly few users who are at least 18 years old and have signed the Wikimedia Foundation's confidentiality agreement for nonpublic information. As Checkusers have access to deleted revisions, they are also required to have passed an "RFA or RFA-identical process".[9]
See Special:ListUsers/checkuser for a list of the 43 CheckUsers.
Members of this group may send messages to multiple users at once.
See Special:ListUsers/massmessage-sender for a list of the 52 mass message senders.
This access is included with the administrator permission.
Accounts used by approved bots to make pre-approved edits can be flagged as such. Bot accounts are automated or semi-automated, the nature of their edits is well defined, and they will be quickly blocked if their actions vary from their given tasks. See Special:ListUsers/bot for a list of the 304 bots.
The founder group was created on the English Wikipedia by developer Tim Starling, without community input, as a unique group for Jimmy "Jimbo" Wales - although Larry Sanger is a co-founder, he was never a member of this group.[10] The group gives Wales full access to user rights. As 'local founder actions' are usually of great interest to the local community, and are only relevant to the English Wikipedia, the 'local founder' right also has the benefit of allowing Wales' actions to be visible in the English Wikipedia rights log. Wales is also a member of the founder global group, which has view-only rights across the Wikimedia network.
The 'researcher' group was created in April 2010 to allow individuals explicitly approved by the Wikimedia Foundation to perform a title search for deleted pages and view deleted history entries but not to view the actual revisions of deleted pages.[11]
See Special:ListUsers/researcher for a list of the 10 current researchers and meta:Research:Special API permissions/Log for further details.
'Transwiki importers' is a group which gives editors the (import) permission for use on Special:Import. This interface allows users to copy pages, and optionally entire page histories, from certain other Wikimedia wikis. The 'import' permission is also included in the administrators and importers user groups. There are currently 0 users in the transwiki importers group. This group is mostly deprecated and is only available for assignment by stewards following a special community approval discussion.

'Importers' is a similar group which gives editors the (importupload) permission as well as the (import) permission for use on Special:Import. Importers have the additional ability to import articles directly from XML (which may come from any wiki site). The 'importupload' permission is also included in the stewards group. See Special:ListUsers/import for the 3 importers. This access is highly restricted and is only available for assignment to a limited number of very trusted users by stewards following a special community approval discussion.
All users can use Special:Export to create an XML export of a page and its history.
See also the import log, transwiki log, Help:Import, and Wikipedia:Requests for page importation.
In general, rights of editors blocked indefinitely should be left as is. Rights specifically related to the reason for blocking may be removed at the discretion of the blocking or unblocking administrators.[12]
Global rights have effects on all public Wikimedia wikis, but their use may be restricted by local policy, see Wikipedia:Global rights policy. For an automatically generated list of global groups with all their permissions, see Special:GlobalGroupPermissions. For a list of users along with their global groups, see Special:GlobalUsers.
Stewardship is an elected role, and stewards are appointed globally across all public Wikimedia wikis.
Users who are members of the 'steward' user group may grant and revoke any permission to or from any user on any wiki operated by the Wikimedia Foundation which allows open account creation. This group is set on Meta-Wiki, and may use meta:Special:Userrights to set permissions on any Wikimedia wiki; they may add or remove any user from any group configured on Meta-Wiki. Stewards are also responsible for granting and revoking access levels such as 'oversight' and 'checkuser', as no other group is capable of making such changes except sysadmins/Support and Safety Staff.
Stewards can also act as checkusers, oversighters, bureaucrats or administrators on wikis which do not have active local members of those groups. For example, if a wiki has a passing need for an edit to be oversighted, a steward can add themselves to the 'oversight' user group on that wiki, perform the necessary function, and then remove themselves from the 'oversight' group using their steward rights.
Most steward actions are logged at meta:Special:Log/rights or meta:Special:Log/gblrights (some go to meta:Stewards/Additional log for global changes). See Special:GlobalUsers/steward or meta:Special:ListUsers/steward for a list of users in this group.
Other global groups include WMF staff; sysadmins (system administrators); ombudsmen; OTRS-members (Volunteer Response Team); global bots; global rollbackers; global sysops (not enabled on English Wikipedia); interface editors. See Global rights policy and meta:User groups for information on these, as well as a full list.
^1 Because bureaucrats were granted the ability to do this, stewards would refer most ordinary requests for removal of the sysop permission to them, but retain the right to remove the sysop permission when appropriate (such as emergencies or requests from the Arbitration Committee).
DefaultSettings.php grants the noratelimit user right to bureaucrats and sysops.
As you would expect, I used HTML and CSS to define the markup and look and feel for the table. JQuery, YUI and Microsoft Ajax libraries are used for client side manipulation. JQuery 1.4.4 and Microsoft Ajax JavaScript libraries are shipped with ASP.NET MVC 3, so if you have Visual Studio 2010, you don’t need to download these separately; when you create an ASP.NET MVC 3 application, these libraries are included in the project template. Just in case, JQuery can be found here and ASP.NET MVC 3 can be found here. YUI 2 can be found here.
As an inversion of control container, I used Castle Windsor which can be found here.
For unit testing, I used NUnit which can be found here.
I used Moq to mock objects which you can find here.
This application will be a simple one and, the main focus will be on how to build a simple HTML table that can be sorted and filtered. We’ll have a single aspx page and two ascx user controls. The two user controls will have basically the same functionality, one will post back the page on every action taken by the user, the other will be Ajax enabled, meaning that it will do partial post backs.
Let’s summarize how this application should work. First time, no filters or sort expressions are enabled, the first page is shown. Below the table, there is a label that displays how many records correspond to the current filtering, the numbers in the last row of the table indicate how many pages are available. The user can change the page size by selecting a different value from the combo box in the top left corner of the page. Every column has a green arrow that, if clicked, brings up a panel with a check box list. Each check box’s text is a possible filter value for the property the column refers to. If no check box is checked, then no filtering is applied by that property name, it is the same as if all of the check boxes were checked. If the “Filter instantly” check box is checked, then when the user checks/unchecks a filter check box, the page is submitted and the filtering is applied instantly. When “Filter instantly” is unchecked, the user needs to click on the “Filter” button located on the filter panel for the filtering to take effect. If the user clicks on the header text, then sorting is applied first in ascending order. If the same header is clicked twice, then sorting is applied in descending order. The “Clear all filters” link in the top left corner of the page resets all filter values, i.e., removes any filter expressions.
If you open up the VS solution, you’ll see three projects. The DomailModel project contains the data that our table will use. The FakePersonsRepository class implements the IPersonsRepository interface and defines some hypothetical persons with hard-coded values. The MvcTableDemo project is an ASP.NET MVC 2 web application and contains the logic for our solution. The MvcTableDemo.Tests project contains unit tests written for the MvcTableDemo project. Let's start dissecting our main project.
DomailModel
FakePersonsRepository
IPersonsRepository
MvcTableDemo
MvcTableDemo.Tests
As I said before, we have an Index.aspx page which contains the common script references that our user controls will use and an if clause that checks the web.config file to decide which user control to instantiate:
if
if (ConfigurationSettings.AppSettings["useAjax"] == "false")
Html.RenderPartial ("ItemsList");
else
Html.RenderPartial ("ItemsListAjax");
In the web.config file, we have the following element:
<appSettings>
<add key="useAjax" value="true" />
</appSettings>
If the value is set to true, the ItemsListAjax.ascx user control will be instantiated, otherwise the ItemsList.ascx is used.
true
The EntitiesController class is our main controller class and has two main methods that handle the requests:
EntitiesController
public ActionResult InitialList
(int pageSize, int page, String sortBy, String sortMode)
public ActionResult MaintainList
(int pageSize, int page, String showFilter, String sortBy,
String sortMode, String scrollTop, FormCollection fc)
I used constructor injection to provide an IPersonsRepository implementation to the main controller class. I used Castle Windsor to set up the dependency on the controller class. To make this work correctly, you need to create a class that subclasses DefaultControllerFactory and create a WindsorContainer object and register all components specified in web.config file. In the Global.asax.cs file's Application_Start() method, you need to set the specified controller factory:
DefaultControllerFactory
WindsorContainer
Application_Start()
...
ControllerBuilder.Current.SetControllerFactory (new WindsorControllerFactory ());
In the web.config file, you need to do the following configuration:
<configSections>
<section
name="castle"
type="Castle.Windsor.Configuration.AppDomain.CastleSectionHandler,
Castle.Windsor" />
</configSections>
<castle>
<components>
<component
id="PersonsRepository"
service="DomainModel.IPersonsRepository, DomainModel"
type="DomainModel.FakePersonsRepository, DomainModel">
</component>
</components>
</castle>
...
The InitialList method is a GET method, and it is executed either when the first request to our page is made (i.e. when you type in the URL address in the browser and hit enter) or when the “Clear filter” link is clicked on the ItemsList.ascx user control (in non-Ajax mode). The main difference between the two methods is that the InitialList method filters the data by Request.QueryString value whereas the MaintainList() method filters by the hidden input fields posted back to the server, i.e., by clicking on the controls of the table. Both methods sort the data if the sortBy and sortMode parameters are filled.
InitialList
GET
Request.QueryString
MaintainList()
sortBy
sortMode
First, let’s see what the InitialList method does. For what kind of URLs is this method executed?
/ => pageSize = 4, page = 1, sortMode = "", sortBy =""
4/2 => pageSize = 4, page = 2, sortMode = "", sortBy = ""
/4/2/id/asc?filter=id:1,2 => pageSize = 4, page = 2, sortBy = "id", sortMode = "asc"
/6/1/birthdate/desc?filter=name:George, john,emily,ismarried:true => pageSize = 6, page = 1, sortBy = "birthdate", sortMode = "desc"
The URLs above are mapped by the RouteCollection defined in Global.asax.cs file’s RegisterRoutes method. The last two URLs above are matched by the following pattern:
RouteCollection
RegisterRoutes
routes.MapRoute (
null, // Route name
"{pageSize}/{page}/{sortBy}/{sortMode}", // URL with parameters
new { controller = "Entities", action = "InitialList" },
new { pageSize = @"\d+", page = @"\d+" }
);
The InitialList method calls the FilterByQueryString method which receives the query string (e.g. filter=id:1,2,name=john) and returns a FiltersData object. The FiltersData class defines a dictionary which stores each property name of our Person object together with an array of FilterData. The FilterData class has two properties:
FilterByQueryString
FiltersData
Person
FilterData
FilterData
public String FilterText{…}
public bool IsActive {…}
The FilterText property stores the value of the property of the Person object and the IsActive property stores whether the filter is active or not.
FilterText
IsActive
The FiltersData object is stored in the ViewData[“filters”] dictionary which is used by the user controls to populate the initial state of the filters on the HTML page. The question that arises now is how to store the filter values on our HTML page. In my solution, the generated HTML for the ID filters will be as follows:
ViewData[“filters”]
<div style="display: none;">
…
<div id="div_Name">
<input id="filter_Name_George" name="filter_Name_George"
type="hidden" value="False" />
<input id="filter_Name_John" name="filter_Name_John" type="hidden" value="False" />
…
</div>
…
</div>
This way, the property name (e.g. Name) and the corresponding filter values (e.g. George, John) can be easily extracted with JavaScript code.
Name
George, John
Now let’s turn to the discussion of how the user controls are built up. First, I will describe the ItemsList.ascx user control which is easier. A typical generated markup by the user control would look like this:
<table id="table_Persons">
<thead>
<tr>
<th class="header_ID">
<a id="a_ID" href="#">ID
<img class="sortImage" src="/Content/Images/down-arrow.jpg" alt="" />
<img class="sortImage" src="/Content/Images/up-arrow.jpg" alt="" />
</div>
</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td>1</td>
<td>George</td>
<td>12/1/1980 1:14:15 PM</td>
<td>True</td>
</tr>
</tbody>
<tfoot>
<tr>
<td style="text-align: left;" colspan="4">
<a href="#" class="selectedPage">1</a>
<a href="#" class="defaultPage">2</a>
<a href="#" class="defaultPage">3</a>
</td>
</tr>
</tfoot>
</table>
If you look at the markup of ItemsList.ascx user control, you’ll notice the code that generates the HTML table and below that, we have the code that generates the markup for the filter values (see markup above) and some input fields that are filled with data from the ViewData dictionary. All these hidden input fields are enclosed in a “using (Html.BeginForm(…))” statement, this way all these values are submitted to the server on a post-back. JavaScript code is responsible for filling these values before submitting back to the server.
ViewData
using (Html.BeginForm(…))
First, let’s ignore how the filter values get set in JavaScript code, suppose that all the values of the hidden fields are set, the page is submitted and the method MaintainList() is executed. The MaintainList() method has the following signature:
public ActionResult MaintainList (int pageSize, int page,
String showFilter, String sortBy, String sortMode, String scrollTop FormCollection fc)
The parameters of this method are the same as for the InitialList method except “showFilter” of type String and “fc” which is of type FormCollection. The fc object contains all the HTML controls with their values which are inside the form element that was submitted. Because we declared this method such that it includes all the parameters that correspond to the names of the hidden fields except those that store filter values, we need to extract from the fc object only those hidden field values that contain the text “filter” in their name (note that each hidden field for the filters is named by the code:
showFilter
String
fc
FormCollection
fc
form
filter
<%=Html.Hidden ("filter_" + propertyName + "_" + fData.FilterText, fData.IsActive) %>
Granted that we have these values, we can build up a FiltersData object that corresponds to the actual filter expressions set by the user. Then, the actual filtering can be easily done by looking at the FiltersData object’s dictionary. One last thing to note about this method is that it differentiates whether the request is an Ajax request or not: if it is, then it fills up the ViewData dictionary with the relevant values, otherwise a JsonResult object is returned back to the client which in turn contains all the relevant values needed to update the page.
JsonResult
Now let’s turn our attention to the JavaScript file ItemsListJS.js which is referenced by the ItemsList.ascx user control. As I mentioned, I used JQuery to traverse the DOM and to set values and attributes of elements. As we have seen, the logic to filter, sort and navigate through pages of the table are all located server-side, so our JavaScript code needs to prepare the relevant data for the server-side code as well as managing how to show a filter panel when the user clicks on a filter link.
The status of the "cbk_Instant" check box is stored in a cookie, so we don’t need to carry this information back and forth between requests and responses. It is worth mentioning the role of the "hdn_showFilter" hidden field: if instant filtering is on, it stores the property name to which the filter text belongs. We need that because when a filter check box’s status is changed, the page is submitted, and HTTP being a stateless protocol, the browser forgets that we had a filter panel open. We need to know which filter panel should be opened again, and by looking at the value of this hidden field, we can find that out. If this value is an empty string, then it means that no filter panel should be open.
cbk_Instant
hdn_showFilter
string
On each response, we build up a filter panel for each column of the table by calling the createFilterPanel(a, propertyName) function. This function’s parameter "a" is an anchor element with the class value set to "filterButton" and the parameter "propertyName" is the name of the property which the column stores. Notice the HTML markup (in this case the property name is "Name"):
createFilterPanel(a, propertyName)
a
filterButton
propertyName
<th class="header_Name">
<a id="a_ID" href="#">Name</a>
<div style="float: right; width: 0px;"></div>
<a href="#" class="filterButton">
<img src="/Content/Images/up.jpg" alt="" />
</a>
…
</th>
Each filter panel should contain a list of check boxes with their states set based on the values of the hidden fields inside the FORM element:
FORM
<form … >
…
<div id="div_Name">
<input id="filter_Name_George" name="filter_Name_George"
type="hidden" value="False" />
<input id="filter_Name_John" name="filter_Name_John"
type="hidden" value="False" />
…
</div>
…
</form>
The createFilterPanel function builds up the check box list dynamically based on the hidden fields. The filter panel “pnl_Filter” that gets created is of type YAHOO.widget.Panel. The Panel component suits the job to host our check box list. Note that all filter panels are created ahead when the page is finished loading, and a filter panel is opened only when the user clicks on the respective anchor element. An important thing to notice is that when a filter panel is opened, we loop through the check box list for the property name and insert the attribute “initialValue” for each check box element with the value of the current state of the check box. Why we need that? Imagine the following scenario: instant filtering is off and no filters are applied; the user clicks on a filter anchor, a filter panel is opened and the user checks some check boxes but before he closes the filter panel, he doesn’t push the filter button on the panel. This way no filtering is applied but the check boxes remain checked, so when this same filter panel is opened again the user will see the checked check boxes and will notice that filtering by those check boxes was not applied. We want to eliminate this kind of behavior.
createFilterPanel
pnl_Filter
YAHOO.widget.Panel
Panel
initialValue
The following code sets the “initialValue” attribute:
$("#pnl_Filter_" + propertyName + " input[type='checkbox']").each(function (i, input) {
$(input).attr("initialValue", $(input).attr("checked"));
});
If instant filtering is on, when a filter check box’s status is changed, the page is submitted but we don’t want the user to be able to mess around with the filter check boxes by checking/unchecking them while the filtering is in progress. Because of this, we disable the controls on the filter panel by calling the changePopupStatus function. This function receives the value that indicates whether to enable or disable the controls and the id of the filter panel. Note that we don’t need to enable the filter panel because a full post-back invalidates this setting.
changePopupStatus
We omitted the discussion of the code that sets up the filter hidden fields. The setupFilterValues (propertyName) function’s role is to loop through the check box list which corresponds to the property name received as a parameter and set the respective hidden fields’ values according to the check boxes’ state:
setupFilterValues
function setupFilterValues(propertyName) {
$("#pnl_Filter_" + propertyName + " input").each(function (i, input) {
var hdn_input =
$("#div_" + propertyName + " input[name='filter_" +
propertyName + "_" + input.value + "'][type='hidden']");
hdn_input.attr("value", input.checked);
});
};
Some words about sorting: the hidden field “hdn_sortMode” stores the property name by which the sorting should occur and the hidden field “hdn_sortMode” stores the mode of the sorting (“asc” or “desc”). When the user clicks on the header text of a column, the sorting is applied instantly. The following code handles the click event on the header anchor:
hdn_sortMode
asc
desc
a_HeaderName.click(function () {
if (hdn_SortBy.attr("value") == propertyName &&
hdn_SortMode.attr("value") == "asc") {
hdn_SortMode.attr("value", "desc");
}
else {
hdn_SortBy.attr("value", propertyName);
hdn_SortMode.attr("value", "asc");
}
submit(false);
});
The ItemsListAjax.ascx and ItemsListAjaxJS.js files are similar to the discussed ones, so I will only highlight the major differences between them.
On the ItemsListAjax user control, we have an IMG element ”img_Loader”; this will be shown when an Ajax request is made. The hidden fields inside the FORM element are the same as in the case of the ItemsList user control, as a matter of fact we don’t need the hdn_showFillter control because this time we’ll do partial post-backs, and the filter panels we’ll be created only once, i.e. on the first response from the server. But to use the same controller method MaintainList(), we need to accommodate to its parameter list.
ItemsListAjax
img_Loader
ItemsList
hdn_showFillter
MaintainList()
In the case of ItemsListAjaxJS.js file, we’ll have some more differences compared to its non-Ajax version. The first thing you’ll notice is that there is attached an event handler to the submit function of the FORM element. Inside this function, there is attached a custom function to the JQuery.ajax object’s success property. This function gets called if the request succeeds. Inside this function, we need to update the content of our table.
JQuery.ajax
If you look at the MaintainList() method, you’ll notice that it returns a JsonResult object. This object’s Data property contains a custom object that we send back to the client. It is defined as:
Data
var resultData =
new
{
Items = currentItems,
Pages = (int)Math.Ceiling ((double)_allItems.Count () / pageSize),
Page = page,
SortBy = sortBy,
SortMode = sortMode,
NumberOfRecords = _allItems.Count ()
};
return new JsonResult { Data = resultData };
As you can see, the resultData object contains all the relevant information we need to update our table on the client side. I’ll describe how the TR elements of the table are updated. The following code does this:
resultData
TR
$(result.Items).each(function (i, item) {
var tr = "<tr>";
$.each(item, function (property, value) {
if (value.toString().indexOf("Date") != -1) {
var re = /-?\d+/;
var m = re.exec(value);
var d = new Date(parseInt(m[0]));
value = d.format("m/d/yyyy HH:MM:ss TT");
}
tr += "<td>" + value + "</td>";
});
tr += "</tr>";
tbody.append(tr);
});
The code above iterates through all the items of the result object and creates a TR element for each one. To access the property values of each item (e.g. Person object in our case) we iterate through the properties of the item. We could also access the property values by calling item.ID, item.Name, etc. but in that case, the code would depend on the Person class and it won’t work for any other. Note that in the case of the .NET DateTime object, the JSON string representation of it doesn’t look the same as what the .NET ToString() method returns (e.g. it renders like "/Date(545346000000)/" which is clearly not what we expect), so we need to convert it to a JavaScript Date object and then format it according to our needs.
item.ID
item.Name
DateTime
string
ToString()
/Date(545346000000)/
Date
As I mentioned previously, the filter panels will be created only once by the createFilterPanel () function. When instant filtering is on and a filter check box's status is changed, we need to disable the controls on the filter panel to prevent the user from messing around with the other check boxes. In contrast with the non-Ajax version, the filter panel is not recreated on partial post-backs, so after we have disabled it, we need to enable it again. As I mentioned the changePopupStatus() function needs the id of the filter panel which has to be enabled or disabled. Because of this, we store the id of the disabled filter panel in the $.disabledPopupName variable; and when the Ajax response arrives, we know which filter panel has to be enabled.
createFilterPanel ()
changePopupStatus()
$.disabledPopupName
If you take a look at the MvcTableDemo.Tests project, you'll notice 3 classes. The PersonsRepositoryCreator class has a static method that creates a mocked IPersonRepository object with the specified number of Person objects. The 2 other classes contain test methods for the ItemsListController's InitialList() method and MaintainList() method, respectively. The naming of the methods are such that they explain what they actually test so I won't go into the details now. Note that in the case of tests written for the MaintainList() method, there are two versions for a test (where it is necessary): one that tests something in non AJAX mode and one that tests something in AJAX mode.
PersonsRepositoryCreator
static
IPersonRepository
ItemsListController
InitialList()
NULL
NULL
That was all, I hope you enjoyed it.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Filters[pInfo.Name] = items.Select(x => x.GetType().GetProperty(pInfo.Name).GetValue(x, null).ToString()).Distinct().Select(x => new FilterData(x)).ToArray();
public class Person
{
public int? ID { get; set; }
public String Name { get; set; }
public DateTime? BirthDate { get; set; }
public bool? IsMarried { get; set; }
}
Filters[pInfo.Name] =
items.Select (x => x.GetType ().GetProperty (pInfo.Name).GetValue (x, null).ToString ())
.Distinct ().Select (x => new FilterData(x)).ToArray ();
Filters[pInfo.Name] =
items.Select(x => x.GetType().GetProperty(pInfo.Name).GetValue(x, null) == null ? string.Empty :
x.GetType().GetProperty(pInfo.Name).GetValue(x, null).ToString())
.Distinct().Select(x => new FilterData(x)).ToArray();
var value = pInfo.GetValue (item, null).ToString ();
var value = (pInfo.GetValue(item, null) ?? string.Empty).ToString();
x.GetType().GetProperty(pInfo.Name).GetValue(x, null)
Filters[pInfo.Name] =
items.Select (x =>
{
var value = x.GetType ().GetProperty (pInfo.Name).GetValue (x, null);
return value != null ? value.ToString () : String.Empty;
}).Distinct ().Select (x => new FilterData (x)).ToArray ();
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://www.codeproject.com/Articles/126064/Sorting-and-Filtering-an-HTML-Table-using-ASP-NET | CC-MAIN-2019-35 | refinedweb | 3,625 | 55.34 |
Solution for Programmming Exercise 2.2
This page contains a sample solution to one of the exercises from Introduction to Programming Using Java.
Exercise 2.2:
Write a program that simulates rolling a pair of dice. You can simulate rolling one die by choosing one of the integers 1, 2, 3, 4, 5, or 6 at random. The number you pick represents the number on the die after it is rolled. As pointed out in Section 2.5, the expression
(int)(Math.random()*6) + 1
does the computation you need to select a random integer between 1 and 6. You can assign this value to a variable to represent one of the dice.
When designing a program, one of the first things you should ask yourself is, "What values do I need to represent?" The answer helps you decide what variables to declare in the program. This program will need some variables to represent the numbers showing on each die and the total of the two dice. Since these numbers are all integers, we can use three variables of type int. I'll call the variables die1, die2, and roll. The program begins by declaring the variables:
int die1;
int die2;
int roll;
In the actual program, of course, I've added a comment to explain the purpose of each variable. The values of die1 and die2 can be computed using the expression given in the exercise:
die1 = (int)(Math.random()*6) + 1;
die2 = (int)(Math.random()*6) + 1;
Note that even though the expressions on the right-hand sides of these assignment statements are the same, the values can be different because the function Math.random() can return different values when it is called twice.
We can then compute roll = die1 + die2 and use three System.out.println statements to display the three lines of output:
System.out.println("The first die comes up " + die1);
System.out.println("The second die comes up " + die2);
System.out.println("Your total roll is " + roll);
Note that I've chosen to use the concatenation operator, +, to append the value of die1 onto the string "The first die comes up". Alternatively, I could use two output statements:
System.out.print("The first die comes up ");
System.out.println(die1);
I'll also note that I could get away without the variable roll, since I could output the value of the expression die1 + die2 directly:
System.out.println("Your total roll is " + (die1 + die2));
However, it's generally better style to have a meaningful name for a quantity. By the way, the parentheses around (die1 + die2) are essential because of the precedence rules for the + operator. You might try to experiment with leaving them out and see what happens.
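To make the precedence point concrete, here is a small sketch (the class and method names are mine, not part of the exercise):

```java
public class PrecedenceDemo {

    // Without parentheses, + is evaluated left to right, so each int is
    // concatenated onto the string one after the other.
    static String withoutParens(int die1, int die2) {
        return "Your total roll is " + die1 + die2;
    }

    // With parentheses, the ints are added first, then concatenated.
    static String withParens(int die1, int die2) {
        return "Your total roll is " + (die1 + die2);
    }

    public static void main(String[] args) {
        System.out.println(withoutParens(3, 4)); // Your total roll is 34
        System.out.println(withParens(3, 4));    // Your total roll is 7
    }
}
```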
public class RollTheDice {

    /* This program simulates rolling a pair of dice. The number
       that comes up on each die is output, followed by the total
       of the two dice. */

    public static void main(String[] args) {
        int die1;   // The number on the first die.
        int die2;   // The number on the second die.
        int roll;   // The total roll (sum of the two dice).
        die1 = (int)(Math.random()*6) + 1;
        die2 = (int)(Math.random()*6) + 1;
        roll = die1 + die2;
        System.out.println("The first die comes up " + die1);
        System.out.println("The second die comes up " + die2);
        System.out.println("Your total roll is " + roll);
    } // end main()

} // end class
CSC/ECE 517 Fall 2012/ch2a 2w31 up
SaaS - 5.3 - The TDD cycle: red-green-refactor[1]
Introduction
Test Driven Development (TDD) is an evolutionary approach to software development that requires a developer to write test code before the actual code and then write the minimum code to pass that test. This process is repeated iteratively to ensure that all units in the application are tested for optimum functionality, both individually and in synergy with others. This produces applications of high quality in less time.
BDD Vs TDD
Behavior driven development (BDD) is a software development process built on TDD. BDD helps capture requirements as user stories, with both a narrative and scenarios. "User stories in BDD are written with a rigid structure having a narrative that uses a Role/Benefit/Value grammar and scenarios that use a Given/When/Then grammar"[2]. TDD helps to capture this behavior directly using test cases. Thus TDD captures low-level requirements whereas BDD captures high-level requirements.
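To illustrate the Role/Benefit/Value narrative and Given/When/Then scenario grammar, a hypothetical BDD user story for a movie-search feature might look like the sketch below (the wording is illustrative, not taken from a real feature file):

```gherkin
Feature: search TMDb for movies
  As a movie fan                       # Role
  So that I can learn about a movie    # Benefit/Value
  I want to search TMDb by title

  Scenario: search by keyword
    Given I am on the home page
    When I fill in "search_terms" with "hardware"
    And I press "Search TMDb"
    Then I should see "Hardware"
```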
Concepts
The following topics provide an overview of a few concepts that are helpful in understanding the TDD cycle and its example.
Seams
The concept of CWWWH (code we wish we had) refers to a missing or buggy piece of code in TDD. In test-driven development, we generally write a test and then implement the functionality. But the code that implements a certain feature may depend on some other feature that is not yet implemented or has errors. That missing piece of code is called the "CWWWH".
Given that scenario, a test case for such functionality is expected to fail owing to the dependency. Nevertheless, the tests can be made to pass with a concept called seams, defined in Michael Feathers's book Working Effectively With Legacy Code[3]
A seam is a place where one can alter the application's behavior without editing the actual code. This helps us to isolate a code from its dependent counterpart. Thus, one can work with a given code, abstracting out the implementation of its dependency, and assuming it works right. This is explained clearly with an example below.
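As a minimal sketch of this idea in plain Ruby (RSpec's stubbing does the same thing more conveniently; the helper name here is mine), a test can temporarily swap out an unfinished dependency without editing its source:

```ruby
# Movie.find_in_tmdb is the unfinished dependency -- the "code we wish we had".
class Movie
  def self.find_in_tmdb(terms)
    raise NotImplementedError, 'not written yet'
  end
end

# The seam: temporarily replace the method's behavior, then restore it.
def with_fake_find_in_tmdb(result)
  original = Movie.method(:find_in_tmdb)
  Movie.define_singleton_method(:find_in_tmdb) { |_terms| result }
  yield
ensure
  Movie.define_singleton_method(:find_in_tmdb, original)
end

with_fake_find_in_tmdb([{ title: 'Hardware' }]) do
  # Code under test can call the dependency as if it already worked.
  puts Movie.find_in_tmdb('hardware').first[:title]  # => Hardware
end
```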
RSpec
RSpec[4] is a great testing tool, which provides features like:
- textual descriptions of examples and groups (rspec-core)
- extension for Rails (rspec-rails)
If we are testing a rails application specifically (as opposed to an arbitrary Ruby program), we need to be able to simulate
* posting to a controller action
* the ability to examine the expected view
- extensible expectation language (rspec-expectations), letting an user express expected outcomes of an object.
Uses instance methods like "should" and "should_not" to check for equivalence, identity, regular expressions, etc.
[1,2,3].should include(1, 2)
- built-in mocking/stubbing framework (rspec-mocks):
RSpec has a built-in facility to set up mock methods or stub objects. A method under test may have several dependencies. It is important to test (unit test) only a particular behavior and mock out the other methods that are called from there. RSpec provides the "should_receive" clause, which overrides the foreign method's implementation and makes sure that missing or buggy methods do not affect the behavior currently being tested.
obj.should_receive(a).with(b)
For any given object, we can set up an expectation that the object should receive a method call. In this case the method named "a" is expected to be called on object "obj". Optionally, the expected arguments can be specified using "with". If arguments are given, two things are checked: (a) whether the method gets called, and (b) whether the correct arguments are passed. If no arguments are specified, the second check is skipped.
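A toy sketch in plain Ruby of what such a message expectation records (this is an illustration only; RSpec's real implementation differs):

```ruby
# Records whether the named method was called on the target with the
# expected arguments.
class Expectation
  attr_reader :satisfied

  def initialize(target, name, expected_args)
    @satisfied = false
    expectation = self
    target.define_singleton_method(name) do |*args|
      expectation.mark(args == expected_args)
    end
  end

  def mark(ok)
    @satisfied ||= ok
  end
end

class Movie; end

exp = Expectation.new(Movie, :find_in_tmdb, ['hardware'])
Movie.find_in_tmdb('hardware')   # the code under test triggers the call
puts exp.satisfied               # => true
```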
Red – Green – Refactor
The following steps define the TDD cycle:
Add a Test
- Think about one thing the code should do : The developer identifies a new functionality from the use cases and user stories, which contain detailed requirements and constraints.
- Capture that thought in a test, which fails : An automated test case (new or a variant of an existing test) is then written, corresponding to the new feature, taking into consideration all possible inputs, error conditions and outputs. Run all the automated tests. The new test inevitably fails because it is written prior to the implementation of the feature. This validates that the feature is rightly tested and would pass if the feature is implemented correctly, which drives a developer to the next step.
Implement the feature
- Write the simplest possible code that lets the test pass : Minimal code is written to make the test pass. The entire functionality need not be implemented in this step. It is not uncommon to see empty methods or methods that simply return a constant value. The code can be improved in the next iterations. Future tests will be written to further define what these methods should do. The only intention of the developer is to write "just enough" code to ensure it meets all the tested requirements and doesn't cause any other tests to fail. [5] Run the tests again. Ideally, all the tests should pass, making the developer confident about the features implemented so far.
Refactor
- DRY out commonality with other tests : Remove duplication of code wherever possible. Organizational changes can be made as well to make the code appear cleaner so it’s easier to maintain. TDD encourages frequent refactoring. Automated tests can be run to ensure the code refactoring does not break any existing functionality
Iterate
- Continue with the next thing (new or improvement of a feature), the code should do.
- Aim to have working code always.
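The steps above can be sketched as one tiny cycle in plain Ruby ('raise' stands in for a test framework; the feature and names are illustrative):

```ruby
# Feature: sum the even numbers in a list.

# RED: the test is written first. Running it before sum_of_evens exists
# fails, confirming the test actually exercises the feature.
def test_sum_of_evens
  raise 'red' unless sum_of_evens([1, 2, 3, 4]) == 6
  raise 'red' unless sum_of_evens([]) == 0
end

# GREEN: the simplest code that makes the test pass.
def sum_of_evens(numbers)
  total = 0
  numbers.each { |n| total += n if n % 2 == 0 }
  total
end

test_sum_of_evens

# REFACTOR: same behavior in a tidier idiom; the test guards the change.
def sum_of_evens(numbers)
  numbers.select(&:even?).sum
end

test_sum_of_evens
puts 'green'
```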
Examples
TMDb : The Movie Database rails application
New Feature : Search TMDb for movies
Controller Action: Setup
- Add the route to config/routes.rb :
To add a new feature to this Rails application, we first add a route, which maps a URL to the controller method [6]
# Route that posts 'Search TMDb' form
post '/movies/search_tmdb'
This route would map to
MoviesController#search_tmdb owing to Convention over Configuration; that is, it would post to the search_tmdb "action" in the Movies "controller".
- Create an empty view:
When a controller method is triggered, it gets some user input, does some computation and renders a view. So, the invocation of a controller needs a view to render, which we need to create even though it is not required to be tested. We start with an empty view. "touch" unix command is used to create a file of size 0 bytes.
touch app/views/movies/search_tmdb.html.haml
The above creates a view in the right directory with the file name, same as Movie controller's method name. (Convention over Configuration)
This view can be refined in later iterations and user stories are used to verify if the view has everything that is needed.
- Replace fake “hardwired” method in movies_controller.rb with empty method:
If the method has default functionality that returns an empty list, replace it with a method that does nothing.
def search_tmdb
end
What model method?
It is the responsibility of the model to call TMDb and search for movies. But, no model method exists as yet to do this.
One may wonder that to test the controller's functionality, one has to get the model method working. Nevertheless, that is not required.
A seam is used in this case to test the code we wish we had ("CWWWH"). Let us call the non-existent model method Movie.find_in_tmdb
Testing plan
- Simulate POSTing search form to controller action.
- Check that the controller action tries to call Movie.find_in_tmdb, passing the data from the submitted form as the argument. Here, the functionality of the model is not tested; instead, the test ensures the controller invokes the right method with the right arguments.
- The test will fail (red), because the (empty) controller method doesnʼt call find_in_tmdb.
- Fix controller action to make the test pass (green).
Test MoviesController : Code[7]
-------- movies_controller.rb --------

class MoviesController < ApplicationController
  def search_tmdb
  end
end

-------- movies_controller_spec.rb --------

require 'spec_helper'

describe MoviesController do
  describe 'searching TMDb' do
    it 'should call the model method that performs TMDb search' do
      Movie.should_receive(:find_in_tmdb).with('hardware')
      post :search_tmdb, {:search_terms => 'hardware'}
    end
  end
end
The above test case for the MoviesController has an 'it' block, which takes a string describing what the test is supposed to check. The do-end block attached to that 'it' contains the actual test code.
The line
Movie.should_receive(:find_in_tmdb).with('hardware') creates an expectation that the Movie class should receive the find_in_tmdb method call with a particular argument. An assumption is made here, that the user has actually filled in hardware in the search_terms box on the page that says Search for TMDb.
Once the expectation is setup, we simulate the post using rspec-rails
post :search_tmdb, {:search_terms => 'hardware'} as if it were a form and was submitted to the search_tmdb method in the controller (after looking up a route). The hash in this call is the contents of the params, which quacks like a hash, and can be accessed inside the controller method.
Thus, the test would fail if:
- once the post action is completed, should_receive finds that the method find_in_tmdb was not invoked.
- and if the method was indeed called, that single argument 'hardware' was not passed.
Testing
Autotest runs continuously and watches for any change in a file. Once the changes are saved, the test corresponding to the change is automatically run and the result is reported immediately.
Run the test written above.
The test FAILS .
Reason for error : MoviesController searching TMDb should call the model method that performs TMDb search
FailureError: Movie.should_receive(:find_in_tmdb).with('hardware') expected: 1 time received: 0 times
The test is expressing what is expected (identifying the right reason of failure).
To make the test pass, we change the MovieController's search_tmdb method to invoke the Model's find_in_tmdb method.
Changes made to the controller
-------- movies_controller.rb -------- class MoviesController < ApplicationController def search_tmdb Movie.find_in_tmdb(params[:search_terms]) end end
Movie.find_in_tmdb(params[:search_terms]) invokes the model's method with the value of search_Terms from the params hash.
Tests are run again. The test PASSES saying MoviesController searching TMDb should call the model method that performs TMDb search pass with 0 failures.
The following explains how invoking the non-existent method works and why the test case passed.
Use of Seams
The test fails as the controller is empty and the method does not call find_in_tmdb. The test case is made to pass by having the controller action invoke "Movie.find_in_tmdb" (which is, the code we wish we had) with data from submitted form. So here the concept of Seams comes in.
should_receive uses Rubyʼs open classes to create a seam for isolating controller action from behavior of a missing or buggy controller function. Thus, it overrides the find_in_tmdb method. Although it does not implement the logic of the method, it checks whether it is being called with the right argument. Even if the actual code for find_in_tmdb existed, the method defined in should_receive would have overwritten it. This is something we would need, as we don't want to be affected by bugs in some other code that we are not testing. This is an example of stub and every time a single test case is completed, all the mocks and stubs are automatically refreshed by Rspec. This helps to keep tests independent.
Return value from should_receive
In this example find_in_tmdb should return a set of movies for which we had called the function. Thus we should be checking its return value. However this should be checked in a different test case. Its important to remember that each "it" clause or each spec should test only one clause/behavior. In this example the first requirement was to make sure that find_in_tmdb is called with proper arguments and the second requirement is make sure that the result of search_tmdb is passed to the view so that it can be rendered. We have two different requirements and hence there must be two different test cases.
Conclusion
Advantages and Disadvantages
Advantages
- ensures the code is tested and enables you to retest your code quickly and easily, since it’s automated.
- immediate feedback
- improves code quality
- less time spent for debugging
- faster identification of the problem
- early and frequent detection of errors prevent them from becoming expensive and hard problems later[8]
Disadvantages
- Does not scale well with web-based GUI or database development [9]
- reliant on refactoring and programming skills
- increases the project complexity and delivery time
- tightly coupled with the developer's interpretation, since developer writes the test cases mostly [10]
References
- ↑ Video :SaaS - 5.3 - The TDD cycle: red-green-refactor
- ↑ TDD vs BDD
- ↑ Working Effectively With Legacy Code by Michael Feather
- ↑ RSpec
- ↑ What is Test Driven Development?
- ↑ Routing in Rails
- ↑ TMDb : MoviesController Test Code
- ↑ TDD Advantages
- ↑ Disadvantages of TDD
- ↑ Shortcomings of TDD
See Also
- Wiki page of TDD
- Definition of TDD
- TDD in different languagues
- The Newbie’s Guide to Test-Driven Development
- TDD vs BDD | http://wiki.expertiza.ncsu.edu/index.php/CSC/ECE_517_Fall_2012/ch2a_2w31_up | CC-MAIN-2017-17 | refinedweb | 2,169 | 62.07 |
Modules provide a container for your app's source code, resource files, and app level settings, such as the module-level build file and Android manifest file. Each module can be independently built, tested, and debugged.
Android Studio uses modules to make it easy to add new devices to
your project. By following a few simple steps in Android Studio,
you can create a module to contain code that's specific to a device type,
such as Wear OS or Android TV. Android Studio automatically
creates module directories, such as source and resource directories, and
a default
build.gradle file appropriate for the device type.
Also, Android Studio creates device modules with recommended build configurations,
such as using the Leanback library for Android TV modules.
This page describes how to add a new module for a specific device.
Android Studio also makes it easy to add a library or Google Cloud module to your project. For details on creating a library module, see Create a Library Module.
Create a new module
To add a new module to your project for a new device, proceed as follows:
- Click File > New > New Module.
- In the Create New Module window that appears, Android Studio offers the following device modules:
- Phone & Tablet Module
- Wear OS Module
- Android TV Module
- Glass Module
- In the Configure your new module form, enter the following details:
- Application Name: This name is used as the title of your app launcher icon for the new module.
- Module Name: This text is used as the name of the folder where your source code and resources files are visible.
- Package Name: This is the Java namespace for the code in your module. It is added as the
packageattribute in the module's Android manifest file.
- Minimum SDK: This setting indicates the lowest version of the Android platform that the app module supports. This value sets the
minSdkVersionattribute in the
build.gradlefile, which you can edit later.
Then click Next.
- Depending on which device module you selected, the following page displays a selection of appropriate code templates you can select to use as your main activity. Click an activity template with which you want to start, and then click Next. If you don't need an activity, click Add No Activity, click Finish, and then you're done.
- If you chose an activity template, enter the settings for your activity on the Customize the Activity page. Most templates ask for an Activity Name, Layout Name, Title, and Source Language, but each template has activity-specific settings. Click Finish. When you create an app module with an activity template, you can immediately run and test the module on your device.
Android Studio creates all the necessary files for the new module and syncs the project with the new module gradle files. Adding a module for a new device also adds any required dependencies for the target device to the module's build file.
Once the Gradle project sync completes, the new module appears in the Project window on the left. If you don't see the new module folder, make sure the window is displaying the Android view.
Import a module
To import an existing module into your project, proceed as follows:
- Click File > New > Import Module.
- In the Source directory box, type or select the directory of the module(s) that you want to import:
- If you are importing one module, indicate its root directory.
- If you are importing multiple modules from a project, indicate the project folder. For each module inside the folder, a box appears and indicates the Source location and Module name. Make sure the Import box is checked for each module that you want to import.
- Type your desired module name(s) in the Module name field(s).
- Click Finish.
Once the module is imported, it appears in the Project window on the left.
Next steps
Once you've added a new module, you can modify the module code and resources, configure module build settings, and build the module. You can also run and debug the module like any other app.
- To learn about build settings for a module, see The Module-level Build File.
- To build and run a specific module, see Select and build a different module.
You'll also want to add code and resources to properly support the new device. For more information about how to develop app modules for different device types, see the corresponding documentation:
- For Wear OS modules: Creating and Running a Wearable App
- For Android TV modules: Get Started with TV Apps
- For Glass modules: GDK Quick Start
As you develop your new module, you might create device independent code that is already duplicated in a different app module. Instead of maintaining duplicate code, consider moving the shared code to a library module and adding the library as a dependency to your app modules. For more information on creating a library module and adding it as a dependency, see Create an Android Library. | https://developer.android.com/studio/projects/add-app-module?hl=FR | CC-MAIN-2020-29 | refinedweb | 832 | 61.16 |
If we have a close look at LEGO® products, we can see that they are all made of the same building blocks. However, the composition of these blocks is the key differentiator for whether we are building a castle or space ship. It's pretty much the same for Podman, and its sibling projects Buildah, Skopeo, and CRI-O. However, instead of recycled plastic, the building blocks for our container tools are made of open source code. Sharing these building blocks allows us to provide rock-solid, enterprise-grade container tools. Features ship faster, bugs are fixed quicker, and the code is battle-tested. And, well, instead of bringing joy into playrooms, the container tools bring joy into data centers and workstations.
The most basic building block for our container tools is the containers/storage library, which locally stores and manages containers and container images. Going one level higher, we find the containers/image library. As the name suggests, this library deals with container images and is incredibly powerful. It allows us to pull and push images, manipulate images (e.g., change layer compression), inspect images along with their configuration and layers, and copy images between so-called image transports. A transport can refer to a local container storage, a container registry, a tar archive, and much more. Dan Walsh wrote a great blog post on the various transports that I highly recommend reading..
Managing container registries with registries.conf
The
registries.conf configuration is in play whenever we push or pull an image. Or, more generally speaking, whenever we contact a container registry. That's an easy rule of thumb. The systemwide location is
/etc/containers/registries.conf, but if you want to change that for a single user, you can create a new file at
$HOME/.config/containers/registries.conf.
So let's dive right into it. In the following sections, we will go through some examples that explain the various configuration options in the
registries.conf. The examples are real-world scenarios and may be a source of inspiration for tackling your individual use case.
Pulling by short names
Humans are lazy, and I am no exception to that. It is much more convenient to do a
podman pull ubi8 rather than
podman pull registry.access.redhat.com/ubi8:latest. I keep forgetting which image lives on which registry, and there are many images and a lot of registries out there. There is Docker Hub and Quay.io, plus registries for Amazon, Microsoft, Google, Red Hat, and many other Linux distributions.
Docker addressed our laziness by always resolving to the Docker Hub. A
docker pull alpine will resolve to
docker.io/library/alpine:latest, and
docker pull repo/image:tag will resolve to
docker.io/repo/image:tag (notice the specified repo). Podman and its sibling projects did not want to lock users into using one registry only, so short names can resolve to more than docker.io, and as you may expect, we can configure that in the
registries.conf as follows:
unqualified-search-registries = ['registry.fedoraproject.org', 'registry.access.redhat.com', 'registry.centos.org', 'docker.io']
The above snippet is taken directly from the
registries.conf in Fedora 33. It's a list of registries that are contacted in the specified order when pulling a short name image. If the image cannot be found on the first registry, Podman will attempt to pull from the second registry and so on. Buildah and CRI-O follow the same logic but note that Skopeo always normalizes to docker.io.
[ You might also like: What's the next Linux workload that you plan to containerize? ]
Searching images
Similar to the previous section on pulling, images are commonly searched by name. When doing a
podman search, I usually do not know or simply forgot on which registry the given image lives. When using Docker, you can only search on the Docker Hub. Podman gives more freedom to users and allows for searching images on any registry. And unsurprisingly,
registries.conf has a solution.
Similar to pulling, the
unqualified-search-registries are also used when using a short name with
podman search. A
podman search foo will look for images named foo in all unqualified-search registries.
Large corporations usually have on-premises container registries. Integrating such registries in your workflow is as simple as adding them to the list of unqualified-search registries.
Short-name aliases
Newer versions of Podman, Buildah, and CRI-O ship with a new way of resolving short names, primarily by using aliases. Aliases are a simple TOML table
[aliases] in the form
"name" = "value", similar to how Bash aliases work. We maintain a central list of aliases together with the community upstream at
github.com/containers/shortnames. If you own an image and want to have an alias, feel free to open a pull request or reach out to us.
Some distributions, like RHEL8, plan on shipping their own lists of short-names to help users and prevent them from accidentally pulling images from the wrong registry.
Explaining how short-name aliases work in detail would expand this blog post significantly, so if you are interested, please refer to an earlier blog post on short-name aliases.
Configuring a local container registry
Running a local container registry is quite common. I have one running all the time, so I can cache images and develop and test new features such as auto-updates in Podman. The bandwidth in my home office is limited, so I appreciate the fast pushes and pulls. Since everything is running locally, I don't need to worry about setting up TLS for the registry. That implies connecting to the registry via HTTP rather than via HTTPS. Podman allows you to do that by specifying
--tls-verify=false on the command line, which will skip TLS verification and allow insecure connections.
An alternative approach to skipping TLS verification via the command line is by using the
registries.conf. This may be more convenient, especially for automated scripts where we don't want to manually add command-line flags. Let's have a look at the config snippet below.
[[registry]] location="localhost:5000" insecure=true
The format of the registries.conf is TOML. The double braces of
[[registry]] indicate that we can specify a list (or table) of
[registry] objects. In this example, there is only one registry where the location (i.e., its address) is set to
localhost:5000. That is where a local registry is commonly running. Whenever the
containers/image library connects to a container registry with that location, it will look up its configuration and act accordingly. In this case, the registry is configured to be insecure, and TLS verification will be skipped. Podman and the other container tools can now talk to the local registry without getting the connections rejected.
Blocking a registry, namespace, or image
In case you want to prevent users or tools from pulling from a specific registry, you can do as follows.
[[registry]] location="registry.example.org" blocked=true
The
blocked=true prevents connections to this registry, or at least to blocks pulling data from it.
However, it's surprisingly common among users to block only specific namespaces or individual images but not the entire registry. Let's assume that we want to stop users from pulling images from the namespace
registry.example.org/namespace. The
registries.conf will now look like this:
[[registry]]] location="registry.example.org" prefix="registry.example.org/example" blocked=true
I just introduced a new config knob:
prefix. A prefix instructs only to select the specified configuration when we attempt to pull an image that is matched by the specific prefix. For example, if we would run a
podman pull registry.example.org/example/image:latest, the specified prefix would match, and Podman would be blocked from pulling the image. If you want to block a specific image, you can set it using the following:
prefix="registry.example.org/namespace/image"
Using a prefix is a very powerful tool to meet all kinds of use cases. It can be combined with all knobs of a
[registry]. Note that using a prefix is optional. If none is specified, the prefix will default to the (mandatory) location.
Mirroring registries
Let's assume that we are running our workload in an air-gapped environment. All our servers are disconnected from the internet. There are many reasons for that. We may be running on the edge or running in a highly security-sensitive environment that forbids us from connecting to the internet. In this case, we cannot connect to the original registry but need to run a registry that mirrors the local network's contents.
A registry mirror is a registry that will be contacted before attempting to pull from the original one. It's a common use case and one of the oldest feature requests in the container ecosystem.
[[registry]] location="registry.access.redhat.com" [[registry.mirror]] location="internal.registry.mirror"
With this configuration, when pulling the Universal Base Image via
podman pull ubi8, the image would be pulled from the mirror instead of Red Hat's container registry.
Note that we can specify multiple mirrors that will be contacted in the specified order. Let's have a quick look at what that means:
[[registry]] location="registry.example.com" [[registry.mirror]] location="mirror-1.com" [[registry.mirror]] location="mirror-2.com" [[registry.mirror]] location="mirror-3.com"
Let's assume we are attempting to pull the image
registry.example.com/myimage:latest. Mirrors are contacted in the specified order (i.e., top-down), which means that Podman would first try to pull the image from
mirror-1.com. If the image is not present or the pull fails for other reasons, Podman would contact
mirror-2.com and so forth. If all mirror pulls fail, Podman will contact the main
registry.example.com.
Note that mirrors also support the
insecure knob. If you want to skip TLS verification for a specific mirror, just add
insecure=true.
Remapping references
As we explored above, a
prefix is used to select a specific
[registry] in the
registries.conf. While prefixes are a powerful means to block specific namespaces or certain images from being pulled, they can also be used to remap entire images. Similar to mirrors, we can use a prefix to pull from a different registry and a different namespace.
To illustrate what I mean by remapping, let's consider that we run in an air-gapped environment. We cannot access container registries since we are disconnected from the internet. Our workload is using images from Quay.io, Docker Hub, and Red Hat's container registry. While we could have one network-local mirror per registry, we could also just use one with the following config.
[[registry]] prefix="quay.io" location="internal.registry.mirror/quay" [[registry]] prefix="docker.io" location="internal.registry.mirror/docker" [[registry]] prefix="registry.access.redhat.com" location="internal.registry.mirror/redhat"
A
podman pull quay.io/buildah/stable:latest will now instead pull
internal.registry.mirror/quay/buildah/stable:latest. However, the pulled image will remain
quay.io/buildah/stable:latest since the remapping and mirroring happen transparently to Podman and the other container tools.
As we can see in the snippet above,
internal.registry.mirror is our network-local mirror that we are using to pull images on behalf of Quay.io, Docker Hub, and Red Hat's container registry. Images of each registry reside on separate namespaces on the registry (i.e., "quay", "docker", "redhat")—simple yet powerful trick to remap images when pulling. You may ask yourself how we can pre-populate the internal mirror with the images from the three registries. I do not recommend doing that manually but to use
skopeo sync instead. With
skopeo sync, a sysadmin can easily load all images onto a USB drive, bring that to an air-gapped cluster, and preload the mirror.
There are countless use cases where such remapping may help. For instance, when using another registry during tests, it may come in handy to transparently pull from another (testing or staging) registry than in production. No code changes are needed.
Tom Sweeney and Ed Santiago used the remapping to develop a creative solution to address the rate limits of Docker Hub. In late November 2020, Docker Hub started to limit the number of pulls per user in a given timeframe. At first, we were concerned because large parts of our testing systems, and continuous integration used Docker Hub images. But with a simple change to the
registries.conf on our systems, Tom and Ed found a great solution. That spared us from the manual and tedious task of changing all images referring to docker.io in our tests.
Advanced configuration management via drop-on config files
Managing configurations is challenging. Our systems are updated all the time, and with the updates may come configuration changes. We may want to add new registries, configure new mirrors, correct previous settings or extend the default configuration of Fedora. There are many motivations, and for certain
registries.conf supports it via so-called drop-in configuration files.
When loading the configuration, the
containers/image library will first load the main configuration file at
/etc/containers/registries.conf and then all files in the
/etc/containers/registries.conf.d directory in alpha-numerical order.
Using such drop-in
registries.conf files is straight forward. Just place a
.conf file in the directory, and Podman will get the updated configuration. Note that tables in the config are merged while simple knobs are overridden. This means, in practice, that the
[[registry]] table can easily be extended with new registries. If a registry with the same prefix already exists, the registry setting will be overridden. The same applies to the
[aliases] table. Simple configuration knobs such as unqualified-search-registries are always overridden.
[ Getting started with containers? Check out this free course. Deploying containerized applications: A technical overview. ]
Conclusion
The
registries.conf is a core building block of our container tools. It allows for configuring all kinds of properties when talking to a container registry. If you are interested in studying the configuration in greater detail, you can either do a
man containers-registries.conf to the read the man page on your Linux machine or visit the upstream documentation. | https://www.redhat.com/sysadmin/manage-container-registries | CC-MAIN-2021-49 | refinedweb | 2,396 | 58.18 |
XML defines white space as the Unicode characters space (0x20), carriage return (0x0D), line feed (0x0A), and tab (0x09), as well as any combination of them. Other invisible characters, such as the byte order mark and the nonbreaking space (0xA0), are treated the same as visible characters, such as A and $.
White space is significant in XML character data. This can be a little surprising to programmers who are used to languages like Java where white space mostly isn't significant. However, remember that XML is a markup language, not a programming language. An XML document contains data, not code. The data parts of a program (that is, the string literals) are precisely where white space does matter in traditional code. Thus it really shouldn't be a huge surprise that white space is significant in XML.
For example, the following two shape elements are not the same.
<shape>star</shape> <shape> star </shape>
Depending on the context, a particular XML application may choose to treat these two elements the same. However, an XML parser will faithfully report all the data in both shape elements to the client application. If the client application chooses to trim the extra white space from the content of the second element, that's the client application's business. XML has nothing to do with it. | https://flylib.com/books/en/1.130.1.51/1/ | CC-MAIN-2019-47 | refinedweb | 221 | 63.19 |
Opened 5 years ago
Closed 5 years ago
#13381 closed (invalid)
admin:login doesn't work
Description
it's absent in django.contrib.admin.sites.py:
def get_urls(self): from django.conf.urls.defaults import patterns, url, include if settings.DEBUG: self.check_dependencies() def wrap(view, cacheable=False): def wrapper(*args, **kwargs): return self.admin_view(view, cacheable)(*args, **kwargs) return update_wrapper(wrapper, view) # Admin-site-wide views. urlpatterns = patterns('', url(r'^$', wrap(self.index), name='index'), url(r'^logout/$', wrap(self.logout), name='logout'), url(r'^password_change/$', wrap(self.password_change, cacheable=True), name='password_change'), url(r'^password_change/done/$', wrap(self.password_change_done, cacheable=True), name='password_change_done'), url(r'^jsi18n/$', wrap(self.i18n_javascript, cacheable=True), name='jsi18n'), url(r'^r/(?P<content_type_id>\d+)/(?P<object_id>.+)/$', 'django.views.defaults.shortcut'), url(r'^(?P<app_label>\w+)/$', wrap(self.app_index), name='app_list') )
adding it works like a charm. attaching a patch..
Attachments (1)
Change History (2)
Changed 5 years ago by alperkanat
comment:1 Changed 5 years ago by russellm
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Resolution set to invalid
- Status changed from new to closed
It's not a case of "doesn't work" - it's a case of "isn't needed". Admin doesn't require a /login url, and unless I'm mistaken, there isn't anywhere in documentation that suggests that it should be available. You just visit the url that you actually want, and the login process is handled as a decorator to that view.
Even if a login view were needed, the patch you provide doesn't work - or rather, it will log you in, but on success, it redirects you to the same login page. Again, the login mechanism for admin is different to the default auth.login process.
patch for adding login named url to be able to use it with namespaces | https://code.djangoproject.com/ticket/13381 | CC-MAIN-2015-14 | refinedweb | 308 | 50.63 |
hi
i have a script that runs in the commandline that needs the normal python sys module.
i can specify the path of the module in blender to get it to run but i need a copy of the normal sys.py as the blender built in modified version has pieces (argv, sys?) removed.
sys.py is not in the python lib, it may be built in to a binary or something - it's not in the source. Does anyone have an idea on where to get a copy of the normal sys.py script from?
sys.py?
Scripting in Blender with Python, and working on the API
Moderators: jesterKing, stiv
Post Reply
3 posts • Page 1 of 1
sys.py doesn't exist!
Hi, I'm sorry but you cannot find sys.py because it doesn't exist: it is a platform dependent module written (I suppose) in C or C++ and compiled into python.
About your problem: what is the platform you're working on?
On Win it is possibile to import the sys module; you can try:
Hope is enough; bye,
F1
About your problem: what is the platform you're working on?
On Win it is possibile to import the sys module; you can try:
I think it also works on Linux boxes and I know it doesn't work on Mac OSX (I'm an iBook owner and I've seen the python support on this platform is incomplete).
Code: Select all
import sys print dir(sys)
Hope is enough; bye,
F1
Post Reply
3 posts • Page 1 of 1 | https://www.blender.org/forum/viewtopic.php?p=8892 | CC-MAIN-2018-43 | refinedweb | 265 | 78.79 |
CodeGuru Forums
>
.NET Programming
>
C-Sharp Programming
> Reading/Writing Initialization files (*.INI)
PDA
Click to See Complete Forum and Search -->
:
Reading/Writing Initialization files (*.INI)
Leo Ayala
June 2nd, 2002, 11:28 AM
I'm new to C# and I'm building one of my first applications that require reading from that has a structure simila to an *.INI file. I could not find any information in the MSDN CD regarding any read/write methods. I know that this is available in VC++ 6 and previous versions since I already have a similar application built in VC++ 6. Is this something that is not supported under C#? Will I have to create my own methods to do this?
Please help.
LA
Arild Fines
June 3rd, 2002, 10:01 AM
P/Invoke the Win32 functions for INI file reading -- eg GetPrivateProfile*
Leo Ayala
June 3rd, 2002, 11:56 AM
I've tried using some of the methods such as GerPrivateProfileInt() but it was not recognized. Do I need to include some other files/namespaces for this to work?
LA
Arild Fines
June 4th, 2002, 09:39 AM
Look at ms-help://MS.NETFrameworkSDK/cpguidenf/html/cpconconsumingunmanageddllfunctions.htm
codeguru.com | http://forums.codeguru.com/archive/index.php/t-192072.html | crawl-003 | refinedweb | 198 | 61.97 |
Beyond Today®
Help for Today, Hope for Tomorrow
July-August 2016
EVOLUTION: An Article of Faith
Answers From a Famous Ex-Atheist About God 8 • Is the Bible True? 17
What’s Behind the Transgender Movement? 22 • A Promise Is a Promise 34
TABLE OF CONTENTS
July-August 2016
FEATURE ARTICLES
4 Evolution: An Article of Faith
The fossil evidence still fails to support Darwin's theory—showing that people accept Darwinian evolution as an article of faith rather than of fact.
8 Answers From a Famous Ex-Atheist About God
After a leading atheist devoted a lifetime to arguing against the existence of a divine Creator, what would lead him to reverse his long-held ideas?
12 A Scientist’s Journey to God
Are science and God incompatible? This scientist shows how studying her chosen field has helped her develop a deeper relationship with God.
14 An Evolutionary Fantasy: Useless Body Parts?
Did a process of unguided, random evolution leave us with unneeded vestigial body parts?
17 Is the Bible True?
The Bible claims to be the very Word of the Creator who made us. Can we really trust those claims?
22 What's Behind the Transgender Movement?
With increasingly mandated acceptance of transgenderism, society is careening off a cliff. How did we get here? And where will we end up?
26 What Made Britain Great? And How Did It All Go Wrong?
What has happened to the British Empire, the largest and most powerful empire known to man?
30 The Cup and the Dish
Christ used washing dishes as a metaphor for spiritual cleanliness. How do His vital words apply to us?
33 Visiting Widows and
Widowers in Their Affliction
What can we do to help someone who has lost
a loved one? Here are practical, biblical solutions.
34 A Promise Is a Promise!
We can trust in God and Christ to lead and help
us on the path of life—through the Holy Spirit.
2  Beyond Today • BTmagazine.org
STUDY SECTIONS
36 Mini-Study: How You Can Correctly
Understand God’s Prophecies and Promises
God has wonderful promises that He will keep, prophecies that tell of
Christ’s return to this earth, when He will begin a time of restoration,
peace and healing.
DEPARTMENTS
20 Current Events and Trends
An overview of events and conditions around the world
29 Letters From Our Readers
Readers of Beyond Today magazine share their thoughts
39 Beyond Today Television Log
A listing of stations and times for the Beyond Today TV program
Photos, from top left: Thinkstock (2), photo illustration by Shaun Venish/Thinkstock, Wikimedia, Scott Ashley. Cover: Thinkstock
EDITORIAL
What Would Charles Lindbergh Think?
As I was passing through the San Diego Airport
recently after attending a conference, being an
aviation buff I saw something that stopped me
in my tracks. Suspended to one side of the
airport terminal was a plane that I instantly
recognized—a full-size replica of the Spirit of St.
Louis, the plane Charles Lindbergh flew in crossing
the Atlantic Ocean from New York to Paris in 1927.
Today we live in a world that takes air travel for
granted, so it’s hard for us to fathom how great Lindbergh’s achievement
was viewed at the time. No one had achieved such a feat before—flying solo nonstop across the Atlantic Ocean for 33½ hours. There was no margin for error—failure meaning near-certain death.
The 25-year-old Lindbergh was instantly hailed
as a hero around the world. Although a civilian, as a
reserve military officer he was awarded the Medal of
Honor, the nation’s highest military decoration, and
the Distinguished Flying Cross. France awarded him
the Legion of Honor medal, its highest honor.
In New York, he was honored with a massive
ticker-tape parade attended by an estimated 4 million
people. Time magazine chose him as its Man of the
Year. San Diego International Airport is named
Lindbergh Field in his honor. He was a true hero.
After admiring the plane for a few minutes and
contemplating Lindbergh’s monumental achievement, I left to find my gate only to be stopped in my
tracks once again. Only yards from this plane was a
bathroom sign that baffled me.
I was used to seeing the generic universal male and
female symbols on restroom doors, but this one had
both those plus another—a half-male, half-female icon.
“All Gender Restroom,” the sign helpfully explained.
“Anyone can use this restroom, regardless of gender
identity or expression.”
I started to go in, thought better of it, then chose
to go elsewhere. “What would Charles Lindbergh
think?” I asked myself as I walked away.
Scott Ashley
Managing editor
Indeed, what would Charles Lindbergh think? If
he’d had any idea how radically his country would be
transformed only a generation or two after his death,
I wonder if he might have never wanted to come back
down to earth.
I also thought about how this sign must strike the
hundreds of U.S. military personnel from the nearby
naval base, aviation field and U.S. Marine base who
pass through this airport daily. Is this what they signed
up for, to defend the rights of confused men to use
women’s bathrooms and vice versa?
The contrast between these colliding realities
was disorienting. “What a sick world,” I thought to
myself, to be so blatantly reminded of our ever-increasing societal degeneration and rejection of our Creator and His instruction when all I wanted was a simple bathroom break before catching my flight.
Sadly, it’s getting
harder and harder to escape a world in which, as the
prophet Isaiah foretold, people “call evil good, and
good evil” and “put darkness for light, and light for
darkness” (Isaiah 5:20).
As I’ve been working on this issue with its theme
on the futility of Darwin’s theory of evolution, I’ve
thought back to that bathroom sign with its three
genders and how the powers that a year ago forced
homosexual marriage down the throat of the American public are now forcing acceptance of another form
of sexual deviancy. I shudder to think what’s next.
Yet it shouldn’t be surprising, because when many
in society come to believe they’re the result of a series
of random, unguided accidental mutations as assumed
by Darwinian evolution, then nothing is right and
nothing is wrong, and none of it matters anyway.
Ideas have consequences! That’s why it’s crucial
that you arm yourself with the facts about the key
questions of life, including whether God really exists,
whether the Bible is His inspired Word, and what His
plan and purpose is for you. This issue is a good place
to start!
GOD, SCIENCE & THE BIBLE
Charles Darwin admitted that the available fossil evidence
didn’t support his theory of “survival of the fittest,” better
known as evolution. But he expected that plenty
of evidence would be found in the coming years.
Now, more than a century and a half later, the
evidence still fails to support his theory—showing
people accept Darwinian evolution more as an
article of faith than of fact.
by Mike Kelley
EVOLUTION
An Article of Faith
A theory full of holes
The past half-century has not been kind to Darwin’s theory.
When his book was published, Darwin admitted that the
fossil record that ought to have supported his theory was full
of gaps, but confidently predicted that a great many of the
missing transitional species would be found to fill these. But
now, more than 150 years later, with paleontologists having
explored vast reaches of the planet, the fossil record still fails
to show the evidence Darwin predicted would be found.
Meanwhile, new discoveries about the vast complexity of
the cell, and the explosion of the field of microbiology, have
added further difficulties that challenge Darwin’s theory.
Today, hundreds of scientists doubt evolution to the point
of rejecting it.
In 1859 Charles Darwin rocked the scientific—and
religious—worlds with his book On the Origin of Species.
It wasn’t long before the scientific communities on both
sides of the Atlantic accepted the notion that life evolved
over “millions on millions of years,” as Darwin put it in
his book, after arising spontaneously from “some warm little
pond,” as he described it in a letter to a friend. Darwin found
himself enshrined as one of the greatest scientific thinkers of all
time, on a par with Galileo and Newton.
But not all who then read Darwin’s book were convinced of
his theory. Other scientists and geologists noted inconsistencies, unexplained creatures appearing at the wrong times in
the fossil record, and other holes in the theory—some of which
Darwin himself acknowledged.
Fast forward 150 years to today, and most of the scientific
world still accepts the theory of evolution as fact in spite of
growing evidence to the contrary. And a world that does not want a God who can tell them what to do wants instead a theory that explains a creation without a Creator.
The Discovery Institute, a Seattle-based think tank devoted to the critical
examination of evolution, lists more
than 700 leaders of scientific thought
who have gone on record as doubting
the theory to the point that many of
them now admit a belief in some higher
intelligence as the most logical source of
the existence of life.
Lost in the furor over evolution is the
fact that the theory of evolution is not a
fact, but is unproven. A scientific theory
is a reasoned explanation that appears
to fit all the facts at hand but that cannot
be tested and verified as scientific law
through the observed results of repeated
experiments according to the scientific
method. Evolutionary theory cannot be
verified through observation because
it has supposedly transpired over eons
of time—and therefore must remain a
theory rather than a proven law.
However, Darwinian evolution should
not truly be classed as a scientific theory
since it does not really fit the evidence
that exists in a reasonable way. Evolution
might be better termed a hypothesis—an
educated guess—about how the vast
array of life we see in the world around
us came to exist. Except it’s not so
educated as it’s touted to be, since its proponents dismiss considerable evidence to the contrary.
Along came Copernicus in the mid1500s showing mathematically that the
Sun is the center of the solar system and
that the earth and other planets revolve
around it. This was confirmed by Galileo
and his new invention, the telescope,
around 1600.
That revolutionary discovery was
initially rejected by most, and Galileo
was severely persecuted. In like manner
scientists who challenge evolution today
often endure ridicule from their colleagues for acknowledging publicly the
growing anti-evolutionary discoveries.
The “Cambrian explosion”
One of Darwin’s two central ideas was
what the scientific community terms
universal common descent. Basically,
this says that all life forms ultimately
descended from a single common
ancestor, which Darwin estimated to
have appeared on the scene somewhere
between 700 and 800 million years ago.
If his theory were accurate, the fossil
record should show millions of evolving
life forms over millions of years, resulting in billions of fossils of transitional
life forms. However, as already pointed
out, he had to admit in his book a major
problem with the fossil record that he
And the theory offers nothing to
explain how the vast universe with its
“building blocks” of life—not to mention
the laws of physics, chemistry and biology
that govern it all—came to be. For this
reason, the concept is more properly
a philosophy or, as we’ll see, almost a
religion of sorts—a false religion.
New discoveries that chop away at the
trunk of the tree of evolutionary thought
have created a situation reminiscent of
the Late Middle Ages, when medieval
thought held that the earth is the center
of the universe. The prevailing view in
the year 1500 was that the sun, moon,
planets and all the stars revolved around
the earth.
could not explain: “As by this theory
innumerable transitional forms must
have existed, why do we not find them
embedded in countless numbers in the
crust of the earth?” (1859, Masterpieces
of Science edition, 1958, p. 137).
Later in his book he again acknowledged the problem with the fossil
record: “Why then is not every geological
formation and every stratum full of such
intermediate links? Geology assuredly
does not reveal any such finely graduated
organic chain; and this, perhaps, is the
most obvious and gravest objection which
can be urged against my theory” (pp. 260-261, emphasis added throughout).
Instead of the “innumerable transitional
forms” predicted by Darwin’s theory, the
actual fossil record is vastly different.
The record showed that suddenly,
nearly 600 million years ago by paleontologists’ dating, an explosion of life
forms occurred during what came to
be called the Cambrian period. Over
a period of just a few million years in
paleontologists’ chronology, a mere blip
in the purported geological record of
the earth, thousands of new creatures
appeared that exhibited a high level of
anatomical sophistication.
What really stumped Darwin
was the fact that no fossil evidence of an
evolutionary ancestor, no “missing link”
for these complex creatures, could be
found anywhere, on any continent.
Again, Darwin acknowledged huge
gaps in the fossil record that should’ve
supported his theory. As he stated:
“The explanation lies, I believe, in the
extreme imperfection of the geological
record” (p. 261). He hoped and assumed
that future scientists would discover the
missing links.
But, notes journalist George Sim
Johnston: “This is the verdict of modern
paleontology: The record does not
show gradual, Darwinian evolution.
Otto Schindewolf, perhaps the leading
paleontologist of the 20th century, wrote
that the fossils ‘directly contradict’
Darwin. Steven Stanley, a paleontologist
who teaches at Johns Hopkins, writes
in The New Evolutionary Timetable that
‘the fossil record does not convincingly
document a single transition from one
species to another’” (“An Evening With
Darwin in New York,” Crisis, April 2006,
online edition).
Since Darwin’s time, millions of new
fossils have been discovered representing
thousands of different species, but none
have been shown to be the missing links
he had hoped would be found. In his
1991 book Darwin on Trial, Dr. Phillip
Johnson wrote: “The single greatest
problem which the fossil record poses for
Darwinism is the ‘Cambrian Explosion’
of around 600 million years ago. Nearly
all the animal phyla (major classifications of animals) appear in the rocks
of this period, without a trace of the
evolutionary ancestors that Darwinists
require” (p. 54).
What the fossil record does show are
soft-bodied worms, jellyfish and similar
creatures that lacked any sort of skeletal
structure. Then a relatively short time
later, a myriad of much more sophisticated creatures having external skeletons, internal organs, and
hearts burst onto the scene. In fact, most if not all of the basic
body plans of animals alive today were present in the Cambrian period, in stark contrast to what Darwin had theorized.
In 2013, in a review of an article about the Cambrian
explosion in the magazine Science, Stanford University’s
Christopher Lowe commented on the scientific world’s struggle
to explain the sudden burst of life in evolutionary terms. “The
range of hypotheses proposed to explain the Cambrian explosion is as diverse and broad as the fossils they seek to explain.”
He went on to verify that this huge group of new Cambrian
animal species did indeed appear suddenly, and that the few
pre-Cambrian fossils were not their ancestors.
Scientists have put forward many theories to try to explain
away the Cambrian explosion and other ways the fossil record
contradicts Darwinian evolution, but it still remains an
enormous challenge to evolutionary thinking.
Are mutations really helpful?
Darwin’s other main point, natural selection, gave rise to
the phrase “survival of the fittest”—an expression widely used
in today’s business and political worlds. According to Darwin,
mutations would arise in life forms, bringing new traits,
and the more advantageous of these would by the process of
natural selection be passed on to successive generations. These
were “the fittest,” and those lacking such traits would die out.
To Darwin it seemed simple enough: Genetic changes or
improvements that gave an animal an edge in survival would
be the most likely to be passed on. But is that the way most
mutations work?
Decades spent in the study of mutations have shown that
most are harmful, leaving the animal less likely to survive. So
one might ask, what are the chances of favorable mutations
actually being passed on? Or to state it differently, what are the
odds that the DNA of a particular creature would be improved
through random occurrence and successfully passed on to
new generations?
Dr. Murray Eden, professor of engineering and computer
science at the Massachusetts Institute of Technology in Cambridge, Massachusetts,
delved into this question. He compared DNA to a computer
code, noting that any computer code would be rendered useless with only a few random changes:
“No currently existing formal language can tolerate random
changes in the symbol sequences which express its sentences.
Meaning is almost invariably destroyed” (“Inadequacies
of Neo-Darwinian Evolution as a Scientific Theory,” Mathematical Challenges to the Neo-Darwinian Interpretation of
Evolution, 1967, p. 14). In other words, the need for specific
arrangement of DNA sequences makes it extremely improbable
that random mutations would generate new functional genes.
Since it has been determined that mutations occur only
once in about every 10 million DNA copies, a logical question
might be: What are the odds of a beneficial mutation happening on its own, at random, with no guidance?
In Darwin’s Doubt, Dr. Stephen Meyer commented on Eden’s
findings: “Did the mutation and selection mechanism have
enough time—since the beginning of the universe itself—to generate even a small fraction of the total number of possible amino
acid sequences corresponding to a single functional protein of
that length? For Eden, the answer was clearly no” (2013, p. 176).
Following the evidence wherever it leads
Ongoing discoveries about the astounding complexity of
DNA continue to provide solid evidence for the divine creation
of life. In fact, it was an objective look at DNA that led the late Sir
Antony Flew, long the leading atheist in England, to renounce
his atheism and accept the existence of a divine Creator.
He acknowledged that he had changed his mind about a
Creator “almost entirely because of the DNA investigations.”
He explained his reasoning in detail in his book There Is a God: How the World’s Most Notorious Atheist Changed His Mind (2007, p. 75).
He concluded that when it came to assessing the evidence of nature, “We must follow the argument wherever it
leads”—which in his case was to the conclusion that the only
reasonable and logical answer is a Divine Creator (pp. 88-89).
(Be sure to read “Answers From a Famous Ex-Atheist About God,”
beginning on page 8.)
Startling admissions from evolutionists
The weight of evidence against Darwinian evolution is
growing in biology, genetics and the fossil record itself. To
their credit, some advocates of evolution openly admit some
of the problems—as evidenced by the following comments.
David Raup, former curator of geology at Chicago’s Field
Museum of Natural History, put it this way nearly 40 years
ago: “Well, we are now about 120 years after Darwin [and now
nearly 160], and knowledge of the fossil record has been greatly
expanded . . . [yet] ironically, we have even fewer examples of
evolutionary transition than we had in Darwin’s time.
“By this I mean that some of the classic cases of Darwinian
change in the fossil record, such as evolution of the horse in
North America, have had to be discarded or modified as a result
of more detailed information—what appeared to be a nice,
simple progression when relatively few data were available now
appears to be much more complex and much less gradualistic”
(“Conflicts Between Darwin and Paleontology,” Field Museum of Natural History
Bulletin, January 1979, pp. 22-25).
He also later admitted: “In the years
after Darwin, his advocates hoped to
find predictable progressions. In general,
these have not been found—yet the
optimism has died hard, and some pure
fantasy has crept into the textbooks”
(Science, July 17, 1981, p. 289).
Stephen Jay Gould, Harvard University
paleontologist and ardent evolutionist,
wrote in his book The Panda’s Thumb:
“The extreme rarity of transitional forms in
the fossil record persists as the trade secret
of paleontology . . . Gradualism [evolutionary change over long periods of time] was
never ‘seen’ in the rocks” (1980, p. 181).
The fossil evidence forced Gould to
admit in a 1980 essay that the traditional
view of Darwinian evolution isn’t supported by the fossil evidence and “as a
general proposition, is effectively dead,
despite its persistence as textbook orthodoxy” (“Is a New and General Theory
of Evolution Emerging?” Paleobiology,
Winter 1980, p. 120).
C.P. Martin of McGill University
in Montreal wrote: “Mutation is a
pathological process which has had little
or nothing to do with evolution” (“A
Non-Geneticist Looks at Evolution,”
American Scientist, January 1953, p. 100).
All of these men have strongly supported evolution. But they, and others like
them, frankly acknowledged some of the
uncomfortable facts that contradict the
theory. Yet unlike Antony Flew, mentioned earlier, they were not willing to
follow all of the evidence to its logical end.
An article of faith
Deeply held beliefs are hard to give up.
Just as those who believed 400 years ago
that the sun revolved around the earth
opposed the new truth that the earth
actually revolves around the sun, so
today do most scientists refuse to accept
modern findings on the origins of life.
The evolutionary paradigm so governs
their thinking that they are unable to
objectively see other alternatives.
How has the scientific community
reacted to the growing weight of
evidence against evolution? The answer:
They practice the same sort of denial of
which they accuse religion—they accept
evolution as an article of faith.
Notice this admission by biologist
Richard Lewontin regarding his attitude
and that of his scientific colleagues:
“We take the side of science in spite of the patent absurdity of some of its constructs . . . because we have a prior commitment, a commitment to materialism” (“Billions and Billions of Demons,” The New York Review of Books, Jan. 9, 1997, p. 31).
Kansas State University immunologist
Dr. Scott Todd echoed that sentiment:
“Even if all the data point to an intelligent
designer, such an hypothesis is excluded
from science because it is not naturalistic”
(Nature, Sept. 30, 1999, p. 423).
New Zealand molecular biologist
Michael Denton carefully examined the
main arguments for Darwinian evolution and found them full of errors and
inconsistencies. In his 1985 book Evolution: A Theory in Crisis, he wrote that
the problems with the theory “are too
severe and intractable to offer any hope
of resolution in terms of the orthodox
Darwinian framework” (p. 16).
He concluded, “Ultimately the Darwinian theory of evolution is no more
nor less than the great cosmogenic myth
of the twentieth century” (p. 358).
More recently, a chapter in Dr. Meyer’s book titled “The Possibility of Intelligent Design” touched on the scientific
world’s refusal to accept any possibility
that intelligence, rather than blind
chance, was involved in the creation of
all life forms, including human beings:
“When the case for intelligent design is
made, it’s often hard to get contemporary
evolutionary biologists to see why such an
idea should even be considered . . . Though
many biologists now acknowledge serious
deficiencies in current strictly materialistic
theories of evolution, they resist considering alternatives that involve intelligent
guidance, direction or design” (p. 337).
In other words, those who cling to the
evolutionary theory refuse to see and
accept the plain evidence. “The fool has
said in his heart, ‘There is no God,’” says
your Bible (Psalm 14:1; 53:1). A spiritually blinded, deceived and materialistic
world will go to extreme lengths to deny
the existence of the Creator.
Do we see any parallels between
today’s agnostic scientific community
and the philosophers of the apostle
Paul’s time? Paul said of them: “Professing to be wise, they became fools, and
changed the glory of the incorruptible
God into an image made like corruptible
man, or of birds and animals . . . As they
did not like to retain God in their knowledge, God gave them over to a debased
mind . . .” (Romans 1:22-23, 28).
Where, then, is your faith? Is it in the
evidence of the fantastically complex
creation you can see around you, or
in a discredited theory riddled with
problems? Critics of religion say it takes
faith to believe in a divine Creator. But
in fact it takes far more faith to believe in
evolution—indeed, blind faith!
What about you? Do you have the
faith required to still believe in evolution? Or are you willing to truly examine
the evidence?
LEARN MORE
What are some of the problems with Darwin’s theory of evolution?
We seldom hear about them, but there are many more than those
summarized in this article. You need to know the truth! Download or
request our free study guide Creation or Evolution: Does It Really Matter What We Believe? to get the seldom-seen other side of the story.
A free copy is waiting for you!
BTmagazine.org/booklets
When you’ve devoted a lifetime to arguing against the existence
of a divine Creator, it can be hard to admit you were wrong. So what
compelled one of the world’s leading atheists to do just that?
by Mario Seiglie
Imagine for a moment being one of the world’s
foremost atheists. You’ve basked in the fame of
academic circles for 50 years and have written
more than 30 books, many of which are hailed
as hallmarks of atheistic thought. You’re highly
respected, honored as one of the world’s brightest minds.
Then, suddenly, you announce you have reversed
course and now believe in God.
You can imagine the reaction from most of your colleagues and the secular press—mostly anger, scorn and
a withering hail of criticism.
What made you sacrifice your reputation and good
standing among many of your peers, knowing full well
how unpopular your belief in God was going to be, especially in an increasingly secular and atheistic society?
It’s a fascinating story, and one that holds many valuable answers for young and old alike who have asked the
most basic and most important question: Does God exist?
It’s not often that you can view this topic from the
other side of the aisle—from one who had been a
champion of atheistic thought and had based his life and
teachings on the premise that God did not exist.
Who is this person? His name is Dr. Antony Flew, an
Oxford professor who spent 50 years teaching philosophy and constructing clever arguments to support an
atheistic point of view.
Why did he change his mind? And more importantly,
why did he go public about his acceptance of God’s
existence, knowing the damage to his reputation among
his colleagues that would follow?
Prior to his death in 2010, Dr. Flew wrote a book in
2007 titled There Is a God: How the World’s Most Notorious Atheist Changed His Mind, explaining why he had
reversed his long-held position and what had compelled
him to admit he had been wrong. It’s not often that we
see a premier philosopher who was an atheist explain
why he changed his mind and came to believe in a divine
Creator. His reasons are great answers to those who
question God’s existence.
A principle to guide your life
In his book Dr. Flew mentioned that early in life, he
came upon a principle that would guide
his career: Follow the evidence wherever
it leads, no matter how unpopular that
may be.
In his youth, he thought the evidence
at that time backed an atheistic perspective, namely, that the scientific data and
philosophical reasonings pointed more
toward a belief that God did not exist.
Yet, he mentioned, from the 1980s
on, the evidence started turning against
atheism and toward a Creator God.
as Plato in his Republic scripted his
Socrates to insist: ‘We must follow the
argument wherever it leads’” (p. 89).
How did the laws of nature come to be?
The first of these has to do with the origin of the laws of nature.
Dr. Flew was quite candid about his former atheistic views on the laws of nature, which are the standard explanation against God’s existence. Yet he would later call this type of reasoning “the peculiar danger, the endemic evil, of dogmatic atheism” (p. 86).
He admitted that the accumulation of the evidence in the last two decades now supported the existence of a Creator God, and he had the courage, personal integrity and humility to accept this conclusion—no matter how personally disagreeable it had been for him.
He mentioned that the evidence dealing with the laws of nature increasingly indicated a Superior Mind was operating at a cosmic level.
This is the assumption that things in
“The leaders of science over the
the universe exist as they are and should
last hundred years,” he wrote, “along
be accepted as such without much
with some of today’s most influential
further thought. It had been his defense
against any questions about the ultimate scientists, have built a philosophically
compelling vision of a rational universe
origins of what exists.
that sprang from a divine Mind. As it
He noted: “Take such utterances as,
happens, this is the particular view of
‘We should not ask for an explanation
the world that I now find the soundest
of how it is that the world exists; it is
philosophical explanation of a multitude
here and that’s all’ or ‘Since we cannot
of phenomena encountered by scientists
accept a transcendent source of life, we
and laypeople alike.
choose to believe the impossible: that
“Three domains of scientific inquiry
life arose spontaneously by chance from
matter’ or ‘The laws of physics are “law- have been especially important for me
less laws” that arise from the void—end . . . The first is the question that puzzled
and continues to puzzle most reflective
of discussion.’ They look at first sight
scientists: How did the laws of nature
like rational arguments that have a
come to be?” (p. 91).
special authority because they have a
One of the most enigmatic aspects of
no-nonsense air about them. Of course,
the laws of nature is that these invisible
this is no more sign that they are either
forces act on matter and energy, but
are not matter or energy themselves.
For them to work, they had to be in
place before matter and energy existed,
and they are not tangible objects. To
believe all these intricate laws that act
in unison somehow appeared together
at just the right time, with just the right
force, without some organizing Intellect
behind them, defies logic.
rational or arguments” (p. 87).
“The important point,” Flew brought
As the growing body of evidence in
science and technology pointed increas- out, “is not merely that there are
regularities in nature, but that these
ingly to a more theistic explanation
regularities are mathematically precise,
of the universe, he asserted that these
universal, and ‘tied together.’ Einstein
standard atheistic explanations were
spoke of them as ‘reason incarnate.’ The
becoming antiquated and untenable.
question we should ask is how nature
“My departure from atheism was not
came packaged in this fashion. This is
occasioned by any new phenomenon
certainly the question that scientists
or argument,” he said. “Over the last
from Newton to Einstein to Heisenberg
two decades, my whole framework of
thought has been in a state of migration. have asked—and answered. Their answer
was the Mind of God” (p. 96).
This was a consequence of my continuSo, although it may not be well
ing assessment of the evidence of nature.
known, a number of cosmologists and
When I finally came to recognize the
existence of a God, it was not a paradigm physicists have admitted that the orderly
laws of the universe point to something
shift, because my paradigm remains,
He then had to reluctantly reassess his
beliefs.
“I now believe,” he came to admit, “that the universe was brought into existence by an infinite Intelligence” (There Is a God, p. 88, emphasis added throughout).
In particular, he offered three lines of evidence that convincingly led him to his belief in God.
“I now believe that the universe was brought into existence by
an infinite Intelligence. I believe that this universe’s intricate laws
manifest what scientists have called the Mind of God.”
bigger and grander than the universe itself!
Flew quoted a number of these scientists, such as the famous
cosmologist Paul Davies, who affirms: “Science is based on the
assumption that the universe is thoroughly rational and logical at all levels. Atheists claim that the laws [of nature] exist
reasonlessly and that the universe is ultimately absurd. As a
scientist, I find this hard to accept. There must be an unchanging rational ground in which the logical, orderly nature of the
universe is rooted” (p. 111).
Flew concluded: ” (p. 112).
How did life originate from non-life?
Flew’s second line of evidence for a belief in God has to do
with the great difference that exists between life and non-life.
“When the mass media first reported the change in my view
of the world,” he related, “I was quoted as” (p. 123).
Pondering over this question, Flew came to the conclusion
that a self-replicating living thing being produced by chance
from non-life utterly defies all odds. Self-replication means
that something has within itself the ability to copy components of its being and pass traits and the mechanism itself
to future generations.
Indeed, that copy has to be so perfectly reproduced that
it can perpetuate itself in turn, and yet it also has to carry
an additional system that permits it to adapt to a changing
environment to improve its chances of survival.
As a philosopher, Flew pointed out: ”
(p. 124).
He came to see that scientists don’t have a satisfying answer
to this question.
“Carl Woese, a leader in origin-of-life studies,” he explained,
“draws attention to the philosophically puzzling nature of
this phenomenon. Writing in the journal RNA, he says, ‘The
coding, mechanistic, and evolutionary facets of the problem
now became separate issues. The idea that gene expression, like
gene replication, was underlain by some fundamental physical
principle was gone.’
“Not only is there no underlying physical principle, but the
very existence of a code is a mystery. ‘The coding rules (the
dictionary of codon assignments) are known. Yet they provide
no clue as to why the code exists and why the mechanism of
translation is what it is.’
“He frankly admits that we do not know anything about
the origin of such a system. ‘The origins of translation, that is
before it became a true decoding mechanism, are for now lost
in the dimness of the past, and I don’t wish to . . . speculate on
the origins of tRNA, tRNA charging systems or the genetic
code’” (pp. 127-128).
Although there is an increasing body of knowledge about how DNA and RNA work, scientists still don’t have a clue how all these coding systems originated, which Flew concluded points to a Superior Intelligence at work.
He asked: . . . This,
too, is my conclusion. The only satisfactory explanation for the
origin of such ‘end-directed, self-replicating’ life as we see on
earth is an infinitely intelligent Mind” (pp. 131-132).
Did something come from nothing?
Flew’s third line of evidence is the very existence of the
universe.
In his early years, Flew believed that the universe had
always existed, a popular belief at that time. If something had
always been around, he reasoned, there was no need to bring
up a Creator to explain it. But new scientific discoveries made
him question this premise and whether something could come
out of nothing.
“In fact,” he related, “my two main antitheological books
were both written long before either the development of the
big-bang cosmology or the introduction of the fine-tuning
argument from physical constants. But since the early 1980s,
I had begun to reconsider. I confessed at that point that atheists have to be embarrassed by the contemporary cosmological
consensus, for it seemed that the cosmologists were providing
a scientific proof of what St. Thomas Aquinas contended could
not be proved philosophically; namely, that the universe had
a beginning.
“When I first met the big-bang theory as an atheist, it
seemed to me the theory made a big difference because it
suggested that the universe had a beginning and that the first
sentence in Genesis (‘In the beginning, God created the heavens and the earth’) was related to an event in the universe . . .
“If there had been no reason to think the universe had a
beginning, there would be no need to postulate something
else that produced the whole thing. But the big-bang theory
changed all that. If the universe had a beginning, it became
entirely sensible, almost inevitable, to ask what produced this
beginning. This radically altered the situation” (pp. 135-137).
Of course, atheists and secular scientists came up with
counterarguments for the growing evidence for a universe
with a beginning. Over the years all kinds of unlikely
explanations have appeared.
“Modern cosmologists,” he pointed out, ).
Flew found all these arguments to be desperate attempts
and quite unconvincing.
He concluded: “The three items of evidence we have considered in this volume—the laws of nature, life with its teleological [or purpose-exhibiting] organization, and the existence of the universe—can only be explained in the light of an Intelligence that explains both its own existence and that of the world” (p. 155).
Thus the existence of a divine Creator is a certain fact of
logic. As Scripture attests: “From the beginning, creation in
its magnificence enlightens us to His nature. Creation itself
makes His undying power and divine identity clear, even
though they are invisible; and it voids the excuses and ignorant claims of these people [who would deny Him]” (Romans
1:20, The Voice).
Professor Flew died in 2010, but his intellectual and
philosophical pursuit led him to accept the existence of an
intelligent Creator—a surprising outcome for him, but one
that was based on his lifelong premise that one should follow
the evidence wherever it leads.
We hope his example, as well as the irrefutable evidence
he was compelled to examine, will help others resolve the
question of whether God exists. And answering in the affirmative is the natural starting point for one’s journey of faith in developing a relationship with this awesome God who made us!
LEARN MORE
Professor Antony Flew found himself confronted with the greatest question of all: Does
God exist? He followed the evidence, and
came to a life-altering conclusion. What about
you? Are you willing to look at the evidence?
Download or request our free study guide
Life’s Ultimate Question: Does God Exist?
BTmagazine.org/booklets
Who’s Behind Beyond Today?

Who’s behind the Beyond Today magazine and
television program? Many readers have wondered
who we are and how we are able to provide Beyond
Today free to all who request it. Simply put, Beyond Today
is provided by people—people from all walks of life,
from all over the world, as enabled by God.
And those people have a common goal—to proclaim
the gospel of the coming
Kingdom of God to all the
world as a witness and to
teach all nations to observe
everything Christ commanded (Matthew 24:14;
28:19-20).
We are dedicated to
proclaiming the same
message Jesus Christ
brought—the wonderful
good news of the coming
Kingdom of God (Matthew
4:23; Mark 1:14-15; Luke 4:43;
8:1). That message truly is
good news—the answer
to all the problems that
have long plagued humankind.
Through the pages of this magazine, on our TV show,
and in dozens of helpful study guides (also free), we
show the biblical answers to the dilemmas that have
defied human solution and threaten our very survival.
We are committed to taking that message to the
entire world, sharing the truth of God’s purpose as
taught by Jesus Christ and His apostles.
The United Church of God has congregations and
ministers around the world. In these congregations
believers assemble to be instructed from the Scriptures
and to fellowship. For locations and times of services
nearest you, contact us at the appropriate address on
page 39. Visitors are always welcome.
For additional information, visit our website:
ucg.org/learnmore
A Scientist’s Journey to
GOD
We may have heard it said that science and God are incompatible. But that isn’t the case, as this scientist shows how studying her chosen field has helped develop a deeper relationship with God.

by Kayleen Schreiber

If all goes well, in less than a year I will have earned my Ph.D. in neuroscience—the study of the brain, spinal cord and peripheral nervous system. As I went through eight years of science education, many people have asked me how I was able to retain my faith in God while being bombarded by so much science.

It’s important to first define what science really is. It’s easy to want to avoid science if it seems like a collection of questionable “facts” assembled by scientists who are biased against God. But science is actually investigation—an organized, rigorous and ongoing attempt to find truth.

Isaac Asimov, a professor of biochemistry and author of hundreds of short stories and books,” (interview on Bill Moyers’ World of Ideas, Oct. 21, 1988).

In addition to understanding that science is a process for discovery, I also started out with a critical belief: The Bible is divinely inspired and is the foundation of all truth (John 17:17). Everything that I hear and learn I compare to what God says. Without this starting point, my journey would have veered off course long ago. Albert Einstein said, “Science without religion is lame; religion without science is blind” (paper for a science, philosophy and religion conference, Sept. 9-11, 1940).

Because I understand what science really is, and because I believe God’s Word is the foundation of truth, being a scientist does not hinder my relationship with God. My scientific journey has actually helped me grow closer to God in distinct ways.

I’ve learned to love and pursue truth

So much information is available to us today, but in many cases people are not held accountable for whether what they say is true. It’s easy to find information that matches what I already think is right. It’s easy to find information that makes up in emotion, bias and curiosity what it lacks in truth.

The scientific method has helped me assess whether I really love truth. Am I able to admit wrong when I find good evidence that refutes what I believe? Do I let my pride influence my opinions?

It’s very difficult to let go of a hypothesis or theory that I thought really made sense when I get results I don’t expect. But because science is a systematic process with controlled experimentation, it is a great mechanism for eliminating lies and false information. When my hypothesis is disproven, I must adjust my thinking, since science values honesty and accuracy very highly. God also takes this matter very seriously, telling us in Proverbs 19:5 that “a false witness will not go unpunished, and he who speaks lies will not escape.”

I also need to use this same attitude in my spiritual life. The apostle Paul says that love “rejoices in the truth” (1 Corinthians 13:6). I cannot try to interpret God’s Word to fit what I want it to say. I must be humble enough to seek God’s truth, even when it goes against what I think should be true.

And I have found that the ongoing search for truth can be a challenging, exhilarating experience. It takes work and is a very humbling process, but in the end it brings godly love and peace.

I’ve learned humility

One of the reasons I decided to pursue neuroscience was that there is so much still to learn. We are nowhere near understanding how the human brain works—we’re still trying to figure out how the nervous system of a worm works! This is true in every corner of creation. God made the physical world so wonderfully complicated that we will be studying it until Jesus Christ returns!

Here’s a short example: In the brain there are neurons (the main cells that talk to each other). They communicate with each other by sending chemicals as neurotransmitters (such as calcium, dopamine or gamma-aminobutyric acid or GABA) across synapses, which are little spaces between neurons. The neurotransmitters are sent and received through little molecular channels. Sounds simple, right? Except that for each neurotransmitter there are many different types of channels that respond in different ways based on the environment of the cell, other surrounding chemicals, the type of cells that are involved, etc.

Just focusing on calcium, there are many, many types of calcium channels, which open and close in different environments, deactivate at different times, and serve various purposes in different areas of cells. For each type of calcium channel, there are multiple subtypes of that channel. For each channel subtype, in each type of cell, scientists must isolate the channel and investigate it experimentally to discover what properties it has, what its purpose is, and what happens when it doesn’t work properly.

And that is all just for one tiny molecule! As I delve deeper into understanding God’s creation, it allows me to appreciate more just how detailed, how organized, how beautiful the mind and creative ability of God is.

With the physical world being this complex, how much more amazing is the spirit world? We can’t even begin to comprehend it! Studying God’s physical creation helps me to maintain an awe and reverence toward Him. As Exodus 15:11 exclaims, “Who is like You, O Lord, . . . glorious in holiness, fearful in praises, doing wonders?”

I’ve learned to broaden my perspective

At the end of the book of Job, God asks: “Where were you when I laid the foundations of the earth? . . . Who has put wisdom in the mind? Or who has given understanding to the heart?” (Job 38:4, 36).

It’s easy to forget what a limited perspective we human beings have. Science allows us to expand our perspective by a few orders of magnitude. We can now see cells and molecules and atoms, as well as galaxies far across the universe. Science enables us to get a small glimpse of the awesome scale of God’s perception. And I like to think about the fact that God can see all the different views at once. He sees neurotransmitters flowing across the synapses in your brain; He sees the moons orbiting Saturn; He sees you.

The fact that science allows us to expand our perspective is important because it’s so easy to be narrowly focused on our immediate circumstances. It’s so easy to automatically question God when hard things happen in my life: “Why would God allow this to happen? How come God won’t just give me this one thing—I know it would be good for me.”

It’s easy to forget how much bigger a view God has. He knows me better than I know myself (1 Kings 8:39), and knows how everything works together (Job 38). Many times things that seem so clear aren’t even actually true. Science shows us that.

I’ve learned to appreciate God’s power and creativity

The more I study the physical world, the more I see its expression as God’s handiwork, and the more I appreciate how much God loves diversity and creativity. He created millions of species for us to discover, to explore, to take care of.

He created a seahorse the size of your fingernail. He created cuttlefish skin that can camouflage through both color and texture. He created all sorts of incredible marvels. God is involved in every little detail—and the more I learn about His creation, the more I learn about Him. He is caring and thoughtful and perfect!

Paul expressed this beautifully: “For since the creation of the world God’s invisible qualities—his eternal power and divine nature—have been clearly seen, being understood from what has been made, so that people are without excuse” (Romans 1:20, New International Version).

I’ve learned to deal with uncertainty

As I noted above, the results we get from science provide a set of facts, of data. But experiments, perhaps more so in biology than the physical sciences, deal with uncertainties and random variables.

That’s why we have statistics, to help us understand the likelihood that results of a particular experiment are true. And with time and further testing, we become more certain about a specific piece of the puzzle that is our world.

As a scientist, I have come to accept that we don’t have all the answers, but we must continue to grow in our understanding of truth. That is why God’s Word is so comforting and so critical. It is one thing of which we can truly be certain. God didn’t provide us the answers to everything yet, but He gave us enough information so that we can have successful lives, grow in character and have hope for the future. As we’re told, “The peace of God, which surpasses all understanding, will guard your hearts and minds through Jesus Christ” (Philippians 4:7).

Science itself isn’t scary or inherently bad. God made the world, and for me, becoming a scientist has allowed me to grow in knowledge, character, humility, respect and appreciation of creativity!

LEARN MORE

As noted in this article, God’s creation offers astounding glimpses into His thinking, mind and character. We’ve prepared a free study guide, Life’s Ultimate Question: Does God Exist?, that helps you understand much more about Him from looking at His creation. Download or request your free copy today!

BTmagazine.org/booklets
An Evolutionary Fantasy: Useless Body Parts?
Did chance evolution leave us with unneeded vestigial body parts—
or did a Creator carefully design every part of us? As it turns out,
there are uses for body parts previously assumed to be useless!
by Dan Dowd

If you sat down and counted all the cells in the human body, you would find more than 10 trillion (10,000,000,000,000) cells. About 12 billion of these are nerve cells linked by more than 10 trillion connections. The body’s cells make up groups of systems that work together to sustain life—the skeletal system, the muscular system, the digestive system, the nervous system, the reproductive system and the cardiovascular system.

All of these systems have subsystems. For example, the muscular system has involuntary and voluntary muscles. Involuntary muscles work without our conscious effort—such as the cardiac or heart muscle. Voluntary muscles are muscles that we have to think about to use—like a bicep muscle that helps us to pick up things.

Not only do the systems in the human body perform specific tasks, but they also work together to improve the work of each system. For example, the skeleton provides the framework to support the body and to protect vital organs. It also provides mobility to the body and produces red and white blood cells that move energy through the body, fight infection and remove waste material.

Despite how impressive and complex our bodies are, proponents of Darwinian evolution have long insisted that parts of the human body are useless. They assume these so-called “vestigial organs” are just leftovers of man’s evolutionary process that serve no useful bodily function.

A Discovery News article several years ago featured a list of supposed “useless” parts in the human body without any follow-up or consideration as to what value these body parts could have. Are these “vestigial organs” really useless body parts, or did a Creator God Himself design them with important roles to play? Let’s give them a closer look.

The third eyelid: plica semilunaris

The plica semilunaris, or the “third eyelid” located in the corner of your eye near the tear duct, looks like an “extra” fold of skin. Evolutionary theory sees it as a remnant of an extra eyelid like the kind a lizard or a shark has. The fact is, it has an important job.

When we wake up we often have gunk in the corner of our eyes—the result of the plica semilunaris secreting
sticky mucus that collects dust, dirt and other matter on the
surface of our eyes. This debris is gently moved to the corner of
our eye where it can be easily removed. If not removed, it could
scratch or damage our sensitive eyeballs. The plica semilunaris
also provides a first line of defense to prevent the entrance of
microbes into the eye.
Adenoids and tonsils

The tonsils are located on each side of the back of the mouth, and the adenoids are located above the roof of the mouth behind the nose. Darwinian evolution’s supporters claim that these organs are prone to infection and should be taken out early in life. After all, they think, these perform no important function in the body. Not true! Just glance in any medical reference book. It will show that these organs are prone to infection because they trap bacteria as part of the lymphatic system.

These two organs are situated where they are because they’re a vital part of the body’s first line of defense. Doctors have found that the adenoids and tonsils “sample” bacteria and viruses that enter through the nose and mouth to help figure out the body’s response. While they can become infected, this is more a result of the job they perform rather than being useless. On top of that, the adenoids have specialized cells that make antibodies to help fight infections.

The tailbone or coccyx

The coccyx serves a very important purpose as an anchor for various muscles, tendons and ligaments. While most medical doctors know how important the coccyx is, many still assume it’s a leftover tail from our supposed earlier primate evolution from monkeys. The coccyx is actually three to five separate or fused vertebrae. As the lowermost part of the spinal column, it is designed to be an anchor while the rest of the spinal column remains more flexible.

If we didn’t have a tailbone for attaching the abdominal muscles that help us lean back and sit down comfortably, we couldn’t function. Many ligaments that cooperate in the flexing and support of the spinal column attach to the coccyx. It works along with muscles of the pelvic floor and muscles that help us walk. The coccyx also offers some protection when we fall on our rear, helping to prevent the more sensitive spinal column from being damaged.

Body hair

Evolutionists maintain that early humans were hairier when they supposedly first branched off from other primates. It’s argued that our ancestors lost the hair over time, not needing as much to keep warm as they learned other ways to keep warm and as their bodies developed better temperature regulation. Thus, our body hair today is taken to be a useless leftover. But let’s look at some of the more up-to-date medical understanding about our body hair.

Body hair provides a variety of functions. The hair on our heads protects us from excessive sunlight and UV radiation as well as wind damage. The hair of our armpits, genitals and legs reduces friction. Hair also aids in the sweating process, pulling the sweat away from our bodies. That helps it to evaporate more easily, keeping it from sticking to the skin and causing chafing or blistering. Body hair can also redirect sweat to protect more sensitive areas, like our eyebrows keeping sweat out of our eyes.

Body hair also aids in our sense of touch. Have you ever felt a bug crawling on the hair of your head? We don’t often think about it, but a lot of what we feel on our skin is because of sensations transmitted through hairs.

As for the thick body hair that supposedly covered prehistoric man, scientists are guessing at this—making an assumption based on their belief that we descended from hairier primates. The fact is, the amount of hair we have perfectly suits our needs.

Erector pili

The ability of our hair to stand up comes from the erector pili (also spelled arrector pili), a muscle attached to several hair follicles. Evolutionists say we needed this ability when we were hairier to look bigger and scarier. Now, they say, it isn’t much good except for giving us goose bumps.

Yet the erector pili serve many functions. Pressure exerted by the erector pili helps the sebaceous glands to secrete sebum (a natural skin lubricant), which helps maintain the integrity of the skin as a barrier (this is also why excessive skin washing is not good, because it removes this sebum that helps to moisturize and protect our skin).

Sebaceous secretions work with apocrine glands to help regulate body temperature. In hot conditions the secretions emulsify and foment formation of and prevent the loss of sweat drops from the skin. In colder conditions, sebum repels rain from skin and hair. This muscle tightening also helps to retain body heat, while the loosening of these muscles can help to cool the skin.

While we can indeed get “goose bumps,” the previous section elaborated on the many other functions our body hair provides. Additionally, God gave us a wide range of emotions that can be expressed in many different ways—being frightened can cause our hair to stand up, but being cold can make our hair stand up too. God gave our bodies the ability to give us feedback on our environment and to give physical expression to our mental or emotional state.

Wisdom teeth

According to accepted evolutionary theory, humans used to have bigger jaws with 32 teeth. It’s said that what we call wisdom teeth were needed to chew a rougher diet. Then, as man supposedly evolved, the jaws became smaller to match the softer diet, and the wisdom teeth were no longer needed. In fact, they’re even a problem in the smaller mouth. Some evolutionists speculate that the extra teeth were needed to replace other molars that fell out.

The major problem with viewing the molars this way is that evolutionists can’t explain why a smaller jaw is an advantage to human beings. Some modern studies have shown that jaw and tooth development and alignment have a lot to do with how strong the jaw muscles are. Foods that require more chewing (not modern processed foods) play a big part in determining how the molars develop and align.

What’s more, this evolutionary perspective would have more weight if every case of erupting wisdom teeth required removal, but studies have shown that most wisdom tooth removal is done preventatively. In fact, about 80 percent of wisdom tooth removals were performed whether there was a dental problem or not.

The bottom line is that wisdom teeth should be treated as any other tooth—useful in chewing the food we eat, or treated appropriately if they fail to function properly.

The appendix

Sitting at the junction of the small intestine and large intestine, the appendix is a thin tube about four inches long and normally in the lower right abdomen. As might be expected, evolutionists claim that the appendix was useful for digestion during our early plant-eating years, but is now useless since we started eating more easily digestible foods. Modern medical science is now admitting this is not the case.

Doctors have discovered that the appendix is very important in the immune system of babies in the womb and young adults. Loren Martin, professor of physiology at Oklahoma State University, wrote in Scientific American: “. . . mechanisms. During the early years of development . . .” (“What Is the Function of the Human Appendix? Did It Once Have a Purpose That Has Since Been Lost?” Oct. 21, 1999).

Some medical doctors believe that the appendix acts as a storehouse for good bacteria, “rebooting” the digestive system after diarrheal illnesses.

Evolution’s useless arguments

These examples can’t begin to detail all the arguments regarding parts of the human body that are considered vestigial, unnecessary or of “unknown use.” The point is that medical science eventually finds that these parts do have purposes.

But one great failing of evolutionary thinking is to not recognize that God has also designed our bodies to adapt to survive. Can we survive without an appendix, or wisdom teeth or any other “minor” part of our body? Of course we can. Humanly, we can adapt to a loss of an appendage, sight or hearing and still have a productive life. Adaptability is not evidence of evolution—it is evidence of good design.

The more research and study that’s done in the natural world around us and in understanding our bodies, the more obvious it becomes that evolution fails to account for the complexity and resiliency of life—life that God created. There is no doubt that the human body is amazing. The more scientists study it, the more complexity they discover. The bodies of other creatures are of course amazing too—as we all share the same Designer. The evidence is abundantly clear that God created us, and we are indeed fearfully and wonderfully made (Genesis 1:26; Psalm 139:14; Romans 1:20).

God made you and me, and He did so for a purpose. The next time you hear or read some statement about how evolution has shaped us, take the time to do the research and find out more about why God designed our bodies to work the way they do.

LEARN MORE

Darwin’s theory of evolution has a great number of problems. For more insight into its many shortcomings, download or request our free study guide Creation or Evolution: Does It Really Matter What We Believe? to get the seldom-seen other side of the story!

BTmagazine.org/booklets

Photos, from left: Thinkstock, Samantha Sophia via UnSplash.com
EXPLORING GOD’S WORD
Is The Bible True?
The Bible claims many things. Most importantly, it claims to be the
very Word of the Creator God. Can we trust the Bible in its claims?
by Darris McNeely
The Bible is the most influential book of all
time. It’s been translated into virtually every
language on earth. Millions of copies have been
printed. You probably have a copy somewhere
in your home right now.
The Bible is debated, discussed and all too often dismissed. It is revered and respected—but also reviled by some.
But perhaps the most important question about the
Bible is one that is often taken for granted: Is the Bible
true? Is it what it claims to be—the inspired, divine, holy
revelation of God, the Creator of the universe?
If the Bible is true, and it is what it claims to be, then it
is a book where you can find the deepest truths of life. If
the Bible is true, it’s the source of ultimate truth about the
Creator God and His plan for mankind and for your life.
We at Beyond Today hold this book in high regard as
the ultimate source of truth for life. Every topic we cover
is rooted firmly in the teachings of the Bible, and we want
to help you see that you can have confidence in the Bible
as the guide and authority for how to live a good life of
following God’s lead.
Right now you may be seeking a source of reliable
knowledge to support your belief in the Bible. We know
there are many challenges by skeptics and scoffers. These
questions have been around ever since the first words
were written down. But here is our stand: We take the
Bible as true—not blindly, but based on many documented proofs and a developed trust. In fact, no credible effort to undermine the validity of this book has ever succeeded.
How have we come to the position where we trust the
Bible to be true? There are many reasons supporting what
the Bible itself tells us. Let’s take a look at a few.
Proof 1: Fulfilled prophecy
The Bible records many events that took place in the
past. But most extraordinary is its record of events that
have yet to take place.
The word “prophecy” in our modern day brings with
it baggage. It’s a word used in fantasy novels or movies
where some aged religious guru calls out some future
event. To be honest, when we hear about prophecies
today, it’s often in the context of unbalanced people
proclaiming kooky, unbiblical ideas about the end time.
But prophecy is real, and it’s a major part of the Word of
God. It takes a balanced approach to look into the study of
biblical prophecy. God is the God of history and the future.
He knows what has happened and what will happen.
Look at what God said through the prophet Isaiah:
“I make known the end from the beginning, from ancient times, what is still to come. I say, ‘My purpose will stand, and I will do all that I please’” (Isaiah 46:10, New International Version 1984, emphasis added throughout).

EXPLORING GOD’S WORD • BTmagazine.org • July-August 2016
A proper understanding of prophecy can build faith. When
we see prophesied events come to pass, we know that God is in
charge, and we can be confident in trusting and obeying Him.
The Bible records many prophecies, then shows how they
were fulfilled. The most important ones describe the first
and second comings of Jesus Christ. Prophecies spanning
thousands of years from Genesis to Isaiah to Micah described
Christ’s first coming—even down to naming the town where
He would be born, Bethlehem. The Bible contains more
prophecies about Christ’s coming than about any other event.
And there are many more that describe what His return will
be like. Studying these is an awe-inspiring exercise that builds
faith and helps make sense of the world.
After all, look around—events are happening all over the
world that you may not fully understand but that are already
affecting your life. Bible prophecy tells us where events like
these will ultimately lead.
Prophecy should motivate us

Another impactful reason for prophecy is to change who you are. There is an essential scripture we should read to keep a balanced approach to prophecy: “But the day of the Lord will come as a thief in the night, in which the heavens will pass away with a great noise, and the elements will melt with fervent heat; both the earth and the works that are in it will be burned up. Therefore, since all these things will be dissolved, what manner of persons ought you to be in holy conduct and godliness . . . ?” (2 Peter 3:10-12).

We can trust that God is working in our lives, and we need to do our part in overcoming sin and becoming more like Him. The Bible makes claims like no other book. God reveals future events. Some of the events revealed in advance have already come to pass.

We could examine many additional fulfilled prophecies to help prove that this book is true, but we don’t have room in this article to cover them in enough detail to do them justice. If you want to know more, I encourage you to search our website ucg.org/learnmore for Nebuchadnezzar’s dreams, the prophecy of Daniel 11 and prophecies of the Messiah.

Proof 2: Archaeological evidence

Biblical archeology is a field that supports prophecy and the biblical record. Let’s consider a specific example—the story of Cyrus, king of Persia.

In 539 B.C., Cyrus captured and took over the kingdom of Babylon. It was Babylon that had captured Jerusalem and deported its people less than 100 years earlier, ending the Kingdom of Judah. Babylon’s policy was to remove conquered people from their homeland and resettle them elsewhere. Most of the Jews had been resettled in the area of Babylon. When you read the story of Daniel, you see a man caught in this web.

Cyrus’ policy was different. He allowed peoples he defeated to remain in their homeland. As he took power from Babylon, he let those peoples already conquered go back to their homelands. Ezra 1 records that Cyrus made a proclamation allowing the Jews to return to Jerusalem and rebuild the city.

That decree was a fulfillment of a prophecy made more than 100 years before by the prophet Isaiah. God had said through him, “I am the Lord . . . who says of Cyrus, ‘He is my shepherd and will accomplish all that I please; he will say of Jerusalem, “Let it be rebuilt,” and of the temple, “Let its foundations be laid”’” (Isaiah 44:24, 28, NIV 1984).

Not only did God foretell what Cyrus would do more than a century before it happened, but He even called him out by name!

Archaeology substantiates the biblical record

Now this is where it gets interesting. In 1879, a clay cylinder was discovered in modern-day Iraq, with the story of Cyrus’ conquest of Babylon. Clay tablets were the notebook or text file of the ancient world, recording events and important details to be remembered later.

The Cyrus cylinder, which can be seen at the British Museum, has been studied and verified by top scholars. This fascinating ancient historical document corroborates the biblical account in Ezra of Jews returning to Jerusalem along with treasures taken from God’s temple. More than that, it supports Isaiah’s amazing prophecy from more than 100 years prior to Cyrus.

Some Bible critics say that it was someone who lived long after Isaiah’s death and the fall of Babylon who wrote that prophecy. They argue that referring to Cyrus by name more than a century in advance is impossible. A modern secular mind cannot admit that the revelation of God to a prophet could be legitimate. But the facts speak otherwise.

For one, the cylinder uses language similar to Isaiah. Like Isaiah’s prophecy, Cyrus calls himself a shepherd. And instead of giving the Persian god credit for his victory over Babylon, Cyrus gives it to the Babylonian god Marduk—just as he gives credit to the God of the Jews in his proclamation for them recorded in Ezra, demonstrating in both cases a policy of cultural pluralism and promotion of the religions of the conquered. He would have allowed Jews to return and build the Jerusalem temple under this policy.

Here is evidence from archaeology that backs up Bible prophecy in a remarkable way. Archaeological discoveries in Israel, Egypt, Jordan, Turkey and other lands where the Bible’s events occurred have unearthed mountains of information shedding light on the people, places and events recorded in the Bible. This is verified in many well-documented and reputable books, papers and museum exhibits.

Proof 3: Scientific discoveries

If you hear the words “science” and “Bible” in the same sentence, it’s usually because someone is talking about a conflict between them. But it doesn’t have to be that way.

The Bible is certainly a book that has to be understood and accepted by faith. But, as mentioned, this faith is not to be ignorant or unreasoning. God doesn’t want us to stop using our brains and intellects to be able to believe that the Bible is His Word. We don’t have to ignore the advances of science to still believe in God.

If we keep an open mind when looking at the Scriptures and understand how science works, it’s plain that the Bible is sensible, consistent and logical and that a proper reading doesn’t contradict what we see in the creation God made.

We need to keep two things in mind when talking about
this subject. First, science is simply a process for learning
about the observable universe. Second, the Bible is primarily
a book for spiritual guidance.
Scientists are constantly studying, testing and reviewing
their conclusions to ensure that what they’ve discovered is
correct. The scientific method itself is the logical workflow of
wondering about something, testing whether that something
is true, and then retesting to double-check. In the modern
world, the way we make scientific progress is by making a
new discovery, forming a new conclusion, then having others
review and test that conclusion to ensure its validity.
Very often, new discoveries are made that modify and in
some cases overturn conclusions made before. In fact, that is
a point that scientists take pride in: Science constantly revises
its conclusions, each time getting closer and closer to the truth
through observation and experimentation.
Something else to remember is that the Bible is primarily a book of spiritual guidance. It’s a book intended to give us meaning for life, to reveal God as our Creator and Heavenly Father, and to give us a road map for how to live. It isn’t meant as a science textbook.

That being said, it is an accurate description of nature from man’s perspective. In scriptures that comment on the physical nature of the universe, a proper reading shows that what is written does not attempt to contradict what we can prove through scientific observation.

I hope you can see how science and the Bible can work together. Science seeks to reveal the physical truths of what we can observe, and the Bible gives answers about the meaning of life and the spiritual mysteries that we as human beings have wondered about for all of history—as well as providing some factual information about the physical realm.

Bible authors were aware of medical and scientific principles

Let’s look at an example of how biblical authors had a sometimes-remarkable understanding of the physical world. It’s an example from the life and work of Moses, who lived about 3,500 years ago.

Moses was raised in an Egyptian home as an adopted son in the house of Pharaoh. Modern archaeologists have uncovered many artifacts from Egypt in Moses’ day, and one of those is an Egyptian medical document. What it describes about how Egyptian doctors treated illness and wounds would make us feel sick today.

Some examples include using statue dust, beetle shells, mouse tails, cat hair, pig eyes, dog toes, eel eyes and goose guts for medicinal purposes. If somebody had a splinter, they would treat it with a salve of worm blood and donkey dung. We know today that dung is full of tetanus spores, so somebody in ancient Egypt might have died from lockjaw simply due to the way a doctor would have tried to treat a splinter.

Yet the Bible doesn’t prescribe any of those disgusting things for treating disease or illness. If the law of Moses were simply something given by the man Moses, and not divinely revealed by God, no doubt Moses would have relied on his 40 years of growing up in Egyptian society in giving basic health advice. Instead, he gave sanitation principles that were thousands of years ahead of their time!

Major disease epidemics through history have been caused because people didn’t understand that good hygiene is essential to good health. In the early 1800s, a major cholera outbreak started in India and over the course of 20 years spread all the way to the United States, killing thousands of people along the way. Cholera outbreaks are almost always linked to improper sewage disposal. In many places around the world, sewage would flow openly in the streets and contaminate the water supply.

In stark contrast with such practices, God clearly instructed Israel on how to dispose of waste: “Designate a place outside the camp where you can go to relieve yourself. As part of your equipment have something to dig with, and when you relieve yourself, dig a hole and cover up your excrement” (Deuteronomy 23:12-13, NIV 1984).

If mankind had listened to God’s instructions, recorded by Moses thousands of years before, countless lives could have been saved. There are many more examples like this, where scientific discoveries of modern times have proven principles in the Bible that are plain for anybody to read.
The Bible is true—believe it and live it!
The Bible is true. It is what it claims to be—the inspired,
divine, holy revelation of God. It is the source of truth and
wisdom and the path to know the Creator of the universe. It is
an accurate record of the history of the world, both in what has
already happened, and in what will happen in the future.
As we study these things, we have to take a close look at the
lives we lead. We have to analyze what changes we need to make.
God has given us an instruction book on life. Through His
Word, He foretells the events that will lead up to the return of
Jesus Christ. What are you going to do about it? It’s between
you and God. We can only show you where to go to find
truth—the Bible. It’s now in your hands.
Pick up your Bible today. Read its words and believe them.
Pray to God and ask Him to open your mind and heart to
understand what He’s saying to you in its pages. Ask Him to
show you how you should respond to its message.
The Bible is true. When you believe it and put it into practice
and live it, it will change your life!
LEARN MORE
This article has touched on only a few of the
many proofs that the Holy Bible is indeed
God’s inspired revelation to humankind. You
can find many more in our free study guide Is
the Bible True? Download or request your free
copy today!
BTmagazine.org/booklets
Current Events & Trends
WORLD NEWS & PROPHECY
Push for European military and growing superstate
Germany released a statement to the European
Union (EU) pushing for a joint effort in military
power. The EU has been a major economic power
since the 1990s, though the euro has had its ups and
downs in the market. Germany has been the stabilizing
factor for countries struggling financially again and
again, and knows its stability gives it the credibility
to suggest creating a military power.
“‘German security policy has relevance—also
far beyond our country,’ the paper states. ‘Germany
is willing to join early, decisively and substantially as
a driving force in international debates . . . to take
responsibility and assume leadership’” (“Germany to
Push for Progress Towards European Army,” Financial
Times, May 2, 2016).
Creating a centralized military would be beneficial for the European Union on a few levels. Primarily, a joint military would minimize redundancy of
efforts—combining efforts could mean more efficient
use of funds for one centralized military force.
“The paper says that the EU’s defense industry is
‘organized nationally and seriously fragmented,’ raising costs, handicapping it in international competition and making it difficult for national militaries to operate together. ‘It is therefore necessary that military capabilities are jointly planned, developed, managed, procured and deployed to raise the interoperability of Europe’s defense forces and to further improve Europe’s capacity to act,’ the paper states” (ibid.).

[Photo: German Leopard 2 battle tanks on maneuvers.]
Revelation 13 and 17 talk about a time before
Christ’s return when leaders will unite to create a
political power called “the Beast.” Putting together
the evidence of biblical prophecy and world history, we
can see that this joint power will arise out of Europe.
From the ashes of the Roman Empire is to eventually
come a political, financial and military superpower
that will be in control at the return of Jesus Christ.
The financial partnership of the euro was a step towards this power. A combined military would be a huge leap that could be the next step in becoming a supranational power on the world scene. History is in the making with this German proposal. Europe is moving closer to the political and military integration the book of Revelation describes of the end time.

We should consider a recent warning from former London mayor Boris Johnson, as reported in the London Telegraph: “The European Union is pursuing a similar goal to Hitler in trying to create a powerful superstate, Boris Johnson says . . . He warns that while bureaucrats in Brussels are using ‘different methods’ from the Nazi dictator, they share the aim of unifying Europe under one ‘authority.’ . . .

“The former mayor of London, who is a keen classical scholar, argues that the past 2,000 years of European history have been characterized by repeated attempts to unify Europe under a single government in order to recover the continent’s lost ‘golden age’ under the Romans” (Tim Ross, “Boris Johnson: The EU Wants a Superstate, Just as Hitler Did,” May 15, 2016).

Many have decried his warning as absurd. But prophecy marches on. Read our free study guide The Book of Revelation Unveiled to learn more. Read it online or request it from one of our offices listed on page 39. (Sources: Financial Times, The Telegraph.)

Islamic State puts Israel in its sights

Despite daily U.S. and allied forces’ airstrikes, the self-proclaimed Islamic State in Iraq and Syria (ISIS) continues to seek the destruction of the Syrian state and the entirety of the current Middle East order. The focus of media attention is on ISIS’ military campaigns and the intense violence that is their calling card. The terrorist organization is waging war on another front and in another way, however, and ISIS’ endgame holds potential danger for the state of Israel. The Washington Times reports on the next step in ISIS’ push to dominate the Middle East (Rowan Scarborough, “Islamic State Aims to Destroy Israel, ‘Liberate’ Jerusalem With Sinai Peninsula Terrorist Force,” May 22, 2016).

An armed guerrilla force on the Egyptian-Israeli border would mean immediate danger for both Egyptian and Israeli citizens. ISIS has already promised violence against both groups in pursuing its goal of conquest:

“Islamic State propaganda promises recruits that they will one day ‘liberate’ Jerusalem and end the state of Israel, according to analysis by the Middle East Media Research Institute [MEMRI], which tracks jihadi communications. The Egyptian army, the force standing in the way, is threatened with beheadings if soldiers continue to fight” (ibid.).

If Syria falls and Sinai is overrun, Israel may face a future of being surrounded by extremist forces in both the north and south. To see how things will ultimately turn out in the region, be sure to read our free study guide The Middle East in Bible Prophecy. (Source: The Washington Times.)

Children of married parents show higher self-esteem

A new study in the United Kingdom shows that children whose parents are married have the highest self-esteem (“. . . Marital . . . Esteem, Says New Research,” Daily Mail, May 29, 2016).

These results should come as no surprise for anybody who understands the biblical basis for marriage. Marriage is a physical institution that in various . . .
Muslim terrorist carries out most deadly shooting in U.S. history
The largest terrorist attack in the United States
since 9/11 of 2001, and the largest mass shooting
in U.S. history, occurred June 12, 2016, at a gay
night club in Orlando, Fla. A radical jihadist who swore
allegiance to the Islamic State (ISIS) gunned down 49
people and injured another 53 before he was killed
by police. ISIS quickly took responsibility, calling the
perpetrator “an Islamic State fighter.” The attack followed calls by ISIS leaders to carry out attacks on non-Muslims during the Islamic holy month of Ramadan.
Yet when President Barack Obama spoke in
response immediately after, he would not refer to
what happened as Islamic terrorism, following his
established pattern, and used this as an opportunity
to declare the need for more gun control.
Republican presidential candidate Donald Trump
quickly said the president should resign if he could
not identify the country’s enemy here. He also leveled criticism against his Democratic opponent in the
presidential race, Hillary Clinton, for similarly refusing
to make the identification—though she then said
she could use such terminology but did not want to
label the whole religion of Islam.
Regarding Obama not mentioning the words “radical Islamic terrorism,” Trump said, “He doesn’t get it or
he gets it better than anybody understands. It’s one or
the other, and either one is unacceptable” (CNN, June
13, 2016).
The attack has thrown the progressive left into a quandary. Had a “right-wing Christian” perpetrated this act, it would have been used to vilify all conservative Christians who want to deny gay rights. But as it was a Muslim, great effort is made to distance him from “authentic Islam”—lest the left’s alliance with Islam against traditional Christianity be put in jeopardy. The shooter was portrayed as a troubled, unbalanced person—but what Islamic terrorist is not? The real problem is that, while those like him are in the minority among Muslims, they still constitute a sizable number of people—perhaps millions.

This should be a wake-up call to those on the left. Radical Islamists are not their friends. In a number of Islamic countries, homosexual behavior is punishable by death and gays are routinely jailed and/or executed. Commentator Mark Steyn said: “The arithmetic isn’t that complicated. The more Islam the fewer gays . . . In the end, you have to pick and choose which squares you want in your diversity quilt” (SteynOnline.com, June 13).

While we obviously do not agree with the lifestyle of those who were attacked, we stand outraged and saddened over this murderous attack. And we pray for a world of peace and right understanding under the rule of the Kingdom of God. (Sources: CNN, Fox News, SteynOnline.com.)

[Photo: Police respond to the June 12 shooting at a gay bar in Orlando, Fla., in which more than 100 patrons were gunned down by a Muslim male who had proclaimed allegiance to the Islamic State. Photos, from left: Wikimedia, City of Orlando Police Department.]

Deadly superbug reaches United States

“For the first time, researchers have found a person in the United States carrying bacteria resistant to antibiotics of last resort, an alarming development that the top U.S. public health official says could mean ‘the end of the road’ for antibiotics.” So began a May 27 Washington Post article titled “The Superbug That Doctors Have Been Dreading Just Reached the U.S.” (Lena Sun and Brady Dennis).

The bacteria strain was detected in a Pennsylvania woman and has proven resistant to the antibiotic colistin, “the antibiotic of last resort for particularly dangerous types of superbugs.” Such bacteria can kill up to half of those infected.

While the particular strain found in the woman can be treated with other antibiotics, “researchers worry that its colistin-resistance gene, known as mcr-1, could spread to other bacteria that can already evade other antibiotics.”

The director of the Centers for Disease Control and Prevention, Tom Frieden, said in an interview that this development . . . “I’ve cared for patients for whom there are no drugs left,” he continued. “It is a feeling of such horror and helplessness. This is not where we need to be” (ibid.).

Lance Price, director of the Antibiotic Resistance Action Center and a George Washington University professor, said in a statement about the case: “It’s hard to imagine worse for public health in the United States. We may soon be facing a world where CRE [antibiotic-resistant bacteria] infections are untreatable.”
Researchers and medical personnel have increasingly warned in recent
years that the spread of such antibiotic-resistant bacteria may lead to a situation in which minor infections and routine operations could easily become life-threatening crises.
Almost on cue, health officials in the United Kingdom announced that “a
super-resistant form” of the sexually transmissible disease gonorrhea “is cutting a swath through straight and gay communities” there (Miriam Stoppard,
“Why Outbreak of Super Gonorrhoea Is Proving Difficult to Contain,” The Mirror,
June 2, 2016).
Officials reported that this new strain can resist medicine’s most powerful
antibiotics, so it is being treated with a combination of two drugs. However, the
bacterium has developed resistance to one of the drugs, and experts say it will
inevitably develop resistance to the other—at which point no other antibiotics would be available. Public Health England described the growing crisis as a
“perfect storm scenario” (ibid.).
Such developments remind us of Jesus Christ’s prophecy of “pestilences”—
disease epidemics—that will plague the world in the time leading up to His
return (Matthew 24:7). These things “are the beginning of sorrows” (verse 8),
leading up to a time when all human life will be threatened with extinction
(verses 21-22). We need to be continually aware of where world trends are taking
us. To learn more, download or request our free study guide Are We Living in the
Time of the End? (Sources: The Washington Post, The Mirror.)
How can you make sense of the news?
So much is happening in the world, and so quickly. Where are today’s dramatic and dangerous
trends taking us? What does Bible prophecy reveal about our future? You’re probably very concerned with the direction the world is heading. So are we. That’s one reason we produce the
Beyond Today daily TV commentaries—to help you understand the news in the light of Bible
prophecy. These eye-opening presentations offer you a perspective so badly needed in our
confused world—the perspective of God’s Word. Visit us at ucg.org/beyond-today/daily !
What’s Behind the TRANSGENDER Movement?
With increasingly mandated acceptance of transgenderism, society is
careening off a cliff. How did we get here? And where will we end up?
by Tom Robinson
A shocking salvo in the culture war came on May
13, 2016, when the Obama administration told
American schools to allow transgender students
to use the bathroom of whatever sex they identify
as. Though having no explicit legal force, there
was nevertheless the threat that schools refusing
to go along could lose federal funding, crippling
them financially.
Many argue that such government-backed extortion will lead
to non-transgendered deviants going into opposite-sex bathrooms
with a mere claim of gender choice that cannot be questioned. A
number of states have since filed lawsuits against the move.
This followed the passing of a North Carolina law in March
against using a restroom not matching one’s biological gender.
Obama’s administration promptly declared this a civil rights
violation, with Attorney General Loretta Lynch absurdly and
outrageously comparing bathrooms having men and women
signs to racial segregation signs of decades ago! (Meanwhile,
the media studiously overlooked the fact that public opposition to the North Carolina law was led by—you guessed it—a
convicted homosexual sex offender.)
The transgender issue got a jolt a year ago with former
Olympic athlete Bruce Jenner coming out as “Caitlyn” and
starring in a TV reality show titled I Am Cait chronicling
Jenner’s transition from a man to a woman. That seemed a
cultural turning point—and oh, how far we’ve come since!
The range of what the public will accept, known as the
window of discourse or Overton window, ever more frequently
shifts. And what was before unthinkable is suddenly part of the
public discourse and then embraced.
And here we are—only a year after the U.S. Supreme Court
struck down all laws banning homosexual marriage, when
five unelected judges forced a major cultural change on the
nation—in a place most Americans and most of the world
couldn’t imagine we’d be not long ago.
What should we make of the current transgender push, and
where is it leading? And what insight does the Bible offer about
this issue and a society’s acceptance of it?
Reality check about a mental disorder
The media, news organizations and government push of the
transgender agenda is promoted as simply a matter of human
rights and fairness. But is that the real issue, or is something
considerably different at work?
Dr. Paul McHugh, longtime chief psychiatrist at Johns
Hopkins Hospital—which pioneered sex-change surgery until
experience demonstrated it wasn’t beneficial—states in a Wall
Street Journal editorial that “policy makers and the media
are doing no favors either to the public or the transgendered
by treating their confusions as a right in need of defending
rather than as a mental disorder that deserves understanding,
treatment and prevention.”
Cutting through the confusion and misinformation regarding
the issue, he describes those feeling themselves of the opposite
sex to be suffering “a mental disorder in two respects.”
The first, he explains, “is that the idea of sex misalignment is
simply mistaken—it does not correspond with physical reality.” And
the second “is that it can lead to grim psychological outcomes”—
including, but not limited to, a dangerously high suicide rate
(“Transgender Surgery Isn’t The Solution,” June 12, 2014, emphasis
added throughout).
He describes the transgendered as suffering “a disorder of
‘assumption’ like those in other disorders familiar to psychiatrists,” with the assumption being that “the individual differs
from what seems given in nature—namely one’s maleness
or femaleness.” He compares it to other kinds of disordered
assumptions such as “anorexia and bulimia nervosa, where the
assumption that departs from physical reality is the belief by
the dangerously thin that they are overweight.” And we don’t encourage such people to lose more weight or get liposuction!
One thing that’s horrible about the widespread acceptance
of transgender surgery, and advocating it for ever-younger
people, is that “when children who reported transgender
feelings were tracked without medical or surgical treatment
at both Vanderbilt University and London’s Portman Clinic,
70%-80% of them spontaneously lost those feelings.”
In other words, 70 to 80 percent of those who at a young
age inwardly believed themselves to be of the opposite sex saw
those feelings vanish as they grew older!
Further, Johns Hopkins University’s tracking of those who
underwent transgender surgery found that “their subsequent
psycho-social adjustments were no better than those who didn’t
have the surgery,” leading the institution to cease doing such
procedures.
Dr. McHugh also cites a long-term study by Sweden’s Karolinska Institute that followed 324 patients who had undergone
sex-reassignment surgery over years, sometimes decades.
“The study revealed that beginning about 10 years after having the surgery, the transgendered began to experience increasing mental difficulties. Most shockingly, their suicide mortality
rose almost 20-fold above the comparable non-transgender
population. This disturbing result has as yet no explanation
but probably reflects the growing sense of isolation reported
by the aging transgendered after surgery. The high suicide rate
certainly challenges the surgery prescription” (ibid.).
Keep in mind that this horrific suicide rate was in Sweden, where transgendered persons have long been accepted in the culture. And a recent report by the American
Foundation for Suicide Prevention stated that more than 4 in
“When the Binary Burns”

A sobering article at LifeSiteNews by Claire Chretien titled “Bathrooms Are Just the Beginning: A Scary Look Into the Trans Movement’s End Goals” (May 6, 2016) presents an eye-opening analysis of what’s going on.
Chretien writes: “The battle over men accessing women’s bathrooms and vice versa has little to do with bathrooms or even transgenderism, a well-known LGBT activist admitted . . . It has everything to do
with re-working society and getting rid of the ‘heterobinary structure’
in which we live—eliminating distinctions between ‘male’ and ‘female’
altogether.
“Riki Wilchins, who has undergone ‘sex change’ surgery and is a
far-left social change activist, wrote in the gay publication The Advocate . . . that social conservatives and many LGBT activists are missing
the point when it comes to the transgender bathroom debate. The
title of Wilchins’ article makes the points succinctly: ‘We’ll Win the
Bathroom Battle When the Binary Burns.’
“People should be able to enter whatever bathroom ‘fits their
gender identity,’ Wilchins wrote, but the fact that we even have ‘male’
and ‘female’ bathrooms reflects something about society that needs
to change. There are many ‘genderqueer’ or ‘non-binary’ people,
Wilchins wrote . . .
“In the eyes of LGBT advocates, the notion of only two genders
. . . is antiquated. But transgenderism inherently acknowledges and
actually reinforces the binary nature of gender. Transgenderism
presumes that a man can be ‘trapped’ inside a woman’s body and
a woman can be ‘trapped’: .”
B Tm a g a z i n e . o r g
•
July-August 2016
23
10 transgender Americans had attempted suicide. Obviously
something is terribly wrong in their thinking!
Transgender advocates, so convinced of the rightness of
their cause, have persuaded several liberal American states to
pass laws banning psychiatrists, “even with parental permission, from striving to restore natural gender feelings to a transgender minor.” In other words, mental health professionals are
forbidden by law from actually helping children afflicted with
this disorder—they are only allowed to encourage them to
remain in their conflicted state of mind!
This legal climate has led misguided physicians in some states
to administer puberty-delaying hormones to gender-confused
children to make later sex-change surgeries less traumatic.
But “given that close to 80% of such children would abandon
their confusion and grow naturally into adult life if untreated,”
explains Dr. McHugh, “these medical interventions come close
to child abuse. A better way to help these children [is] with
devoted parenting.”
“At the heart of the problem,” he concludes, “is confusion over the nature of the transgendered . . .” (ibid.).
In fact, gender is written into our very DNA. If you have
XX chromosomes, you are female. If XY, you are male. Even if
anatomy is surgically altered, the fundamental genetic identity
remains.
But this particular disorder has become a tool of those
promoting “anything goes” when it comes to sex and sexual
practices. And so it has increasingly been presented to the
public as something to embrace and celebrate.
Our Creator God, however, has a distinctly different view. In
Isaiah 5:20 He warns us, “Woe to those who call evil good, and
good evil; who put darkness for light, and light for darkness;
who put bitter for sweet, and sweet for bitter!”
In defiance of the Creator of human sexuality
Moreover, it appears that the ultimate goal of many activists
is to eliminate the concept of gender altogether (see “When the
Binary Burns” on page 23).
Of course, this goes directly against God who created
mankind. His Word, the Bible, tells us right up front: “So God
created man in His own image; in the image of God He created
him; male and female He created them . . . And God said to
them, ‘Be fruitful and multiply . . .’” (Genesis 1:27-28).
Our Creator is the One who established human beings as
being either male or female—with an important reason for
doing so being that they can reproduce and multiply those who
are made in His image (compare Genesis 5:1-3). Moreover, He
designed human family and marriage to represent an ultimate
spiritual relationship (see our free study guide Marriage and
Family: The Missing Dimension to learn more).
And God wants the male-female distinction to remain. In His
inspired Word, He rebukes the blurring of gender lines, declaring: “A woman shall not wear anything that pertains to a man,
nor shall a man put on a woman’s garment, for all who do so are
an abomination to the Lord your God” (Deuteronomy 22:5).
An “abomination” is something utterly reprehensible to
God. Some have argued that mere transvestitism wouldn’t
be loathsome to God—that this must be referring to a pagan
worship practice. Yet there is no hint of this here, so it’s best
that we take this instruction exactly as it’s stated. Still, there is
a connection to paganism.
The beginning of the corruption
Idolatrous religion and many of the false philosophies we
see today can be traced back to ancient Babylon and, even
before that, to the lies of the serpent in the Garden of Eden—
Satan (Revelation 12:9).
Satan misled Eve into eating the forbidden fruit, telling her
she wouldn’t die as God said, and then Adam partook also
(verses 1-6). They then became aware of their nakedness and
sewed fig leaves to cover up (verse 7). God asked, “Who told
you that you were naked?” (verses 10-11).
What had happened? When and how did the shame of
nakedness enter? Realize that the passage is a very condensed
account and much more is going on here than meets the eye.
Considering the whole of pagan religion that began with
Satan’s lies, his statement that Eve wouldn’t die and would be
like God, and his being behind the shame of nakedness here,
we get some indication of what was going on here and the
implications.
It seems likely that here began the false concept of the
dual nature of man as spiritual immortal soul imprisoned
in corrupt material flesh (see our free study guide What Happens After Death? to better understand the truth about this).
Satan wanted Adam and Eve to believe that their existence
transcended their physical bodies, and that their bodies—
especially the sexual organs of physical procreation—were in
fact evil and something they should be ashamed of.
Satan, a former archangel in rebellion against God, hated
these human beings God created to expand the divine family.
He wanted to corrupt and destroy them. Moreover, as a spirit
being Satan had no sexuality and no means of reproducing himself and may have been envious of these attributes
bestowed on mankind.
This may be part of why Satan set about to discredit and
corrupt sexuality. Yet he also may have been aiming to hinder
human reproduction, with the desire of eradicating mankind.
Later he would promote sexuality as an object of lust—and
sexual fertility as pagan magical power. The corruption and
destruction of the human race remains his goal.
The doctrine of the immortality of the soul and the baseness
of the flesh was later part of the false religion of Babylon, the
fountainhead of all idolatry. A powerful strain of this ideology,
which developed into various forms, was gnosticism, which
sought to transcend earthly reality and the corrupt physical
existence through secret knowledge. The biblical writers Paul,
John and Jude confronted those who were infiltrating the
Church with such ideas.
Pagan dualism and gnosticism led to two extremes promoted by Satan—celibacy to the point of avoiding marriage on
one hand and lustful abandon on the other. While some sought
to deny the flesh through asceticism (Colossians 2:21), others
indulged the flesh in every way, arguing that their real selves
were spiritual and transcending the flesh, so fleshly debasement did not matter. It became “a license for immorality”
(Jude 4, New International Version).
Babylonian worship even included gender bending as part
of transcending fleshly limitations (see “Gender Blurring in
Pagan Worship” at right).
Where to now?
Think about all this in the context of the culture war being
waged in the present time. Satan still has his unwitting worshippers today—those who would follow him in his quest to debase
and destroy mankind through the elimination of heterosexual
marriage, family roles and the concept of gender altogether.
The lie in Eden and Babylon is still with us today—in
merely another form. There are even those who deny their fleshly reality, claiming to be something else. And there are others who
want to live entirely in an unreal world of gender fluidity or
as neither male nor female.
Our society tells people to be true to themselves and
everything will be okay. But true compassion for individuals
who feel like their body is the wrong sex is not to alter every
cultural norm and offer them harmful and permanent body
mutilations to affirm their psychological condition. True
compassion would be to help them overcome their disorder.
Healing and relief are what they need most. Our Creator
God understands that we each have struggles, human failings, and hardships to overcome. We are born into a world
that does not reflect His compassionate love, but instead
Satan’s twisted and destructive hatred. God sent His Son to
die for us while we were still sinners (Romans 5:8), and His
promise of help for those who accept His calling extends to
everybody.
Those who follow God are to be lights to those in need of
that healing force. Our response to those who are transgender
should be an extended hand of compassion and a heartfelt
prayer for healing. Jesus explained to the self-righteous
Pharisees, “It is not the healthy who need a doctor, but the
sick” (Luke 5:31, New International Version 1984).
If you’ve supported this lie, God calls on you to repent.
And He forgives. If you are one who actually struggles with
gender identity, seek Him with your whole heart for the healing of your mind—and also seek out professional help from
counselors who perceive this problem correctly.
The number of transgenders is relatively tiny, estimated
at somewhere between 1.4 and 7.7 per 100,000 according to a
U.S. Census Bureau study. But the number of those embracing and condoning this problem is increasing by leaps and
bounds, as the culture pushes headlong away from God.
Where will the window of what’s acceptable shift to next?
Things still unspeakable today may be the headlines of years
soon ahead. Riki Wilchins, the transsexual activist quoted
in the sidebar “When the Binary Burns,” sees the work of
transgender advocates as important to the goal of eradicating
the male-female structure of society, noting that they have
“finally and perhaps unwittingly opened the gender Pandora’s Box, and over the next few years all sorts of unexpected
non-binary things . . . are about to come popping out.”
May God through Jesus Christ “deliver us from this present
evil age” (Galatians 1:4)—soon!
Gender Blurring in Pagan Worship
Consider the ancient Babylonian goddess Ishtar, worshipped as the “Queen of Heaven.”
Ishtar was the prototype for the Greek Aphrodite and Roman
Venus and a host of other deities—and was identified with the
planet Venus as the morning and evening star. The Bible equates
the worship of pagan deities with the worship of demons (Deuteronomy 32:16-17; Psalm 106:35-38; 1 Corinthians 10:20). And it’s
intriguing to see that Scripture refers to Satan in his rebellion as
Lucifer, the morning star (Isaiah 14:12). The devil is the real personage behind the false deity.
The Zondervan Illustrated Bible Backgrounds Commentary mentions the androgyny or gender ambiguity of Ishtar in its note on
Deuteronomy 22:5 (2009, Vol. 1, p. 493), citing an enlightening
source we now turn to—Gender and Aging in Mesopotamia: The
Gilgamesh Epic and Other Ancient Literature by Rivkah Harris (2000).
Ishtar, Harris explains, “is androgynous, marginal, ambiguous . . .
She is betwixt and between . . . Central to the goddess as paradox
is her well-attested psychological and physiological androgyny.
Inanna-Ishtar is both female and male . . . [in one place stating]
‘Though I am a woman I am a noble young man’” (pp. 160, 163).
She shattered all gender and socioeconomic distinctions—
being both a royal queen and “the harlot of heaven . . . set out for
the alehouse” (p. 166). And in all this she was the role model for
her followers. Among her powers was this from a Sumerian poem:
“To turn a man into a woman and a woman into a man are yours,
Inanna” (p. 160).
In the Descent of Ishtar we are told of some participants in her
cult: “The male prostitutes comb their hair before her . . . They
decorate the napes of their necks with colored bands . . . They
gird themselves with the sword belt . . . Their right side they adorn
with women’s clothing . . . Their left side they cover with men’s
clothing . . .” (p. 170). The revel and competition ended in a bloody
spectacle of self-cutting (compare 1 Kings 18:28).
Harris states: “Their transvestitism simulated the androgyny
of Inanna-Ishtar. It was perhaps the inversion of the male/female
binary opposition that thereby neutralized this opposition. By
emulating their goddess who was both female and male, they
shattered the boundary between the sexes” (pp. 170-171). This was
seen as a way of rising above the prison of the flesh.
LEARN MORE
Sadly, the United States of America has lost its
way—terribly. Its highest court has banned
God and prayer from its classrooms, while
those same schools teach students that they’re
the product of mindless evolution. Request or
download our free study guide The United States
and Britain in Bible Prophecy to understand what’s
going on in the light of Bible prophecy!
BTmagazine.org/booklets
WORLD NEWS & PROPHECY
What Made Britain Great? And How Did It All Go Wrong?
by Milan Bizic

My father often told the story of being introduced to someone from Australia. After talking for a few minutes, and mistaking his new friend’s accent for a British one, my dad asked, “Are you from Great Britain?” With typical Australian humor, the man replied, “Great Britain ain’t been great for years, mate!”

How true that is for those who have watched as Great Britain—formerly ruling the greatest empire the world had ever seen, on which the sun never set—declined in power, influence and internal stability in the 20th century and into the 21st.

A number of factors have contributed, including the toll of world wars, the welfare state, multiculturalism, integration into a hyper-regulated Europe and, above all, a slide into ever-worsening immorality. Now the nation faces an uncertain future following the vote on “Brexit,” British exit from the European Union, in late June—the outcome unknown as of this writing. Whichever way the vote has gone when you read this, the nation no doubt remains quite divided on the matter—and on many other issues.

How did Great Britain become great in the first place? How did things change? And what should the individual citizen do?
“Such is the end of Empire”
At its peak in the years leading up to World War I, the
British Empire was truly a world-dominating power, with
territories, colonies and dependencies on all six of the
earth’s inhabited continents. In terms of military might,
it was unchallenged by any one single opponent until the
tremendous changes of the World War I era.
Two world wars later, the combined weight of the global
conflicts drained the nation’s economy as well as the
willpower of its people. The next step was the dissolution
of Britain’s world-spanning empire over the half-century
following World War II.
First the empire began its exit from Asia by granting
India independence and pulling out of the Middle East
in the deal that would lead to the formation of the state of
Israel in 1948. Next came Britain’s departure from Africa
in the 1960s, led most notably by the independence of
South Africa.
Britain’s exit from colonialism and the final breath of
its empire came on July 1, 1997, when the United Kingdom
officially handed Hong Kong back to China. After the
ceremony, on his return trip to England, Prince Charles
wrote in his journal a succinct summary of what the
handover meant: “Such is the end of Empire, I sighed to
myself” (“Charles’ Diary Lays Thoughts Bare,” BBC News,
Feb. 22, 2006).
Much has been written on the enduring impact of
Britain’s rise to greatness and its colonial legacy, both in
admiration and in criticism. Human nature being what it
is, her rule was at times marred by personal and national
ambition and greed. Yet the British molded much of what
we see in the world today in terms of national boundaries, the global balance of power and an enduring cultural
legacy.
At one point the largest and most powerful empire known to man, the British Empire ruled lands on all inhabited continents of the earth.
Now Britain vies with other powers
for world influence. What happened?
Where do things stand? And what
must be done?
Political author Fareed Zakaria commented on Britain’s powerful cultural
influence: “In fact, Britain has arguably
been the most successful exporter of its
culture in human history” (“The Future
of American Power,” Foreign Affairs,
May-June 2008, p. 20).
God and His Word the source of greatness

Not mentioned by Zakaria is the single most important contribution the British Empire made to the far-flung regions of its dominion—the English-language King James Version of the Holy Bible.

In the year 1611, in the early stages of the British Empire’s first great period of expansion and prosperity, the new translation of the Bible ordered by King James seven years earlier was completed. With that, the Word of God became more widely available and understandable to more people than ever before.

Its widespread availability wasn’t confined to the British Isles, either. As the British Empire spread its arms wide and began to encompass more and more of the world, missionaries followed, taking the Bible’s message to millions of people in lands it had never reached before.

In this way, the British people delivered untold blessings to people around the world—blessings that stem from reading, believing and following the Word of God, a spiritual reality affirmed by Jesus Christ (Luke 11:28).

And the Bible’s dissemination has made a big difference in the world (compare Isaiah 55:10-11). Basic Christian morals are a blessing to any society, especially as more and more citizens learn to faithfully abide by them and practice them. After all, who doesn’t think a society is better served when its people are honest? Or what nation has ever been strengthened by murder?

How much more powerful the impact of the Bible’s standards can be when it begins to take root in a society whose value system included polygamy, intertribal warfare or cannibalism. This was the depth of the impact the British Empire and its Bible-carrying citizens had on many of its colonies. In this way the British people, with their faith in God and conviction that the Bible was His very Word, were a blessing to millions around the world.

A cultural shift away from God’s laws

How did such monumental national decline take place in such a short amount of time? What were the chief factors in the changes to British society and culture?

As we covered before, a nation that reads, believes and obeys the Word of God is blessed by virtue of the Bible’s instructions and the natural positive effects of God’s way. The other side of the equation is also true: A nation that rebels against God’s way, especially after having tasted the blessings that come from following Him, will soon find itself experiencing what the Bible calls “curses”—the opposite of blessings.

Deuteronomy 28 gives a list of blessings from obeying God as well as a chilling list of the curses that would come on a nation that rejected His authority. These curses aren’t magic or mysticism. They’re simply the logical outcomes of a society embracing sin.

The rapid decline and disassembling of the British Empire should come as no surprise if viewed as the natural next step in a nation that no longer views God’s Word as its moral compass.

We should consider that Britain’s highly socialized welfare system—which, as in many other countries, has the appearance of a godly system of helping others—is, in vital respects, actually contrary to God’s law. While the Bible does advocate helping the poor (and spells out how to do so), the government has no right to confiscate income from some to redistribute that
to others. This is a form of theft, a violation of the Eighth
Commandment.
Socialist programs end up creating government dependency
rather than promote industry, a high work ethic and charitable
concern for others. This is not to say that people in need should
not be helped economically, but the fruit of the social welfare
state over generations shows that this is not the right or truly
beneficial way to go about it.
Further burdening the welfare system today is mass immigration. The Daily Express reported: “The burden of public
services, benefits and pensions for migrants and their families
far outstrips the income from what they pay in taxes. Migrants
contributed £89.7 billion in taxes but received £106.7 billion in
public spending during 2014-15” (Macer Hall, “Migrants Cost
Britain £17bn a Year,” May 17, 2016). The same report says that
leaving the EU would cut this bill by £1.2 billion, but that is
clearly not enough.
Abandonment of Christian values

Beyond this, other laws of God are also violated more and more at every level of society. To claim that the United Kingdom has suffered from a steep and drastic decline in the fundamental standards of Christian morality should not be at all surprising or shocking. Even observers a world away should see the signs of a nation that has lost its spiritual way.

Several years ago, Richard Chartres, the Anglican bishop of London, addressed the moral failings of the British people in regard to the decay of biblical family values. “‘Literally millions of children grow up without knowing a stable, loving, secure family life—and that is not to count the hundreds of thousands more who don’t even make it out of the womb each year,’ he said, in a pointed reference to abortion.

“‘Promiscuity, separation and divorce have reached epidemic proportions in our society. Perhaps, then, we shouldn’t be surprised that depression and the prescription of antidepressants has reached a similarly epidemic level’” (quoted by Jerome Taylor, “Bishop Says Moral Decline Has Hit ‘Epidemic’ Levels,” The Independent, June 1, 2012).

As Christian morals and beliefs are increasingly passé, and fewer and fewer Brits identify as Christians (now down to about 42 percent), an increasing number of people, nearly half the British population, consider themselves to be entirely nonreligious (John Bingham, “Christians Now a Minority in UK as Half the Population Have No Religion,” The Telegraph, Sept. 10, 2013).

In the religious void left behind in Christianity’s decline, other religions, notably Islam, are growing. One factor in this is the mass immigration already noted, as this includes a huge influx of people from Muslim nations. And multiculturalism, wherein immigrants maintain their own culture rather than blending in to the general culture, is heavily promoted in Britain, causing societal fragmentation.

One result of this is many pockets throughout the country of Islamic fundamentalism and even extremism, wherein the West is viewed as an enemy, with these having high birthrates—against the reduced birthrates of the native population. This was one of the curses God warned of: “The alien who is among you shall rise higher and higher and you shall come down lower and lower” (Deuteronomy 28:43). In Britain this has led to homegrown terrorism through bombings and even beheadings on the street.

Perhaps of chief concern in Islam’s rise in the UK is the appearance of Sharia (Islamic law) “courts” in fundamentalist Muslim communities. While these institutions are not legal courts, per se, their existence has many in Britain concerned about the potential creation of an alternate legal system in the country.

“‘Over the years, we have witnessed with increasing alarm the influence of ‘Sharia courts’ over the lives of citizens of Muslim heritage,’ nearly 200 women’s rights and secular campaigners said in a statement. ‘Though . . .’” (Emma Batha, “Britain Must Ban Sharia ‘Kangaroo Courts,’ Say Activists,” Reuters, July 15, 2015).

Yet despite the growing influence of right-wing Islam, the country’s moral underpinnings are tied most strongly to the secular humanist movement. This was displayed prominently recently when London’s new mayor Sadiq Khan, himself raised Muslim, flew the rainbow flag from city hall to honor the International Day Against Homophobia and Transphobia (Donna Edmunds, “Muslim Mayor of London Flies Rainbow Flag From City Hall,” Breitbart, May 17, 2016).

Back to the Bible

Britain is no different from other Western nations in forgetting the source of their incredible blessings and their national power—the Almighty Creator God and His Word, the Bible.

While the future of nations and the balance of world power is solely the purview of God, He does allow us as individuals the opportunity to know Him and follow Him. How do we know God and follow His will? The answer to personal success is the same as the answer to national success—God’s holy Word.

It’s difficult to see our nations, our neighbors and, in some cases, even our families forget God and walk their own way. But Christians must follow God’s lead, not society’s. Commit yourself to reading His Word each day. It is a wellspring that truly brings blessings: “Blessed are those whose way is blameless, who walk in the law of the Lord! Blessed are those who keep his testimonies, who seek him with their whole heart, who also do no wrong, but walk in his ways!” (Psalm 119:1-2, English Standard Version).

That’s a lesson all the British—and all people everywhere—desperately need to learn.

LEARN MORE

Britain has a remarkable story. It rose from a tiny nation to become the greatest empire the world has known. But now it’s a shadow of its former greatness. To learn the remarkable story as revealed in Bible prophecy, download or request our free study guide The United States and Britain in Bible Prophecy.

BTmagazine.org/booklets
LETTERS FROM
OUR READERS
Thankful for Beyond Today magazine

My husband and I have been following you, studying with you and checking all your teachings out in the Word of God. We have started a small Bible study group where we teach what we are learning to others. What a shock after being lied to for all our lives and our family’s lives! We meet every Sabbath.
From the Internet

I am a longtime avid reader of the Beyond Today magazine and booklets. It is one of the best publications in this ever-confusing and troubling world. Thank you so very much.
Reader in Simbu, Papua New Guinea

I would like to thank the magazine for helping me understand more about the Word of God and about the real truth of God’s Word. It’s good to know that there are special people that God uses to help others. I really thank God for the United Church of God. I watch your Beyond Today program, and it’s helping me so very much.
Subscriber in Alabama

My husband and I want to thank you so very much for our first issue of Beyond Today! We are truly enjoying it! I was reading in it today and you wouldn’t believe how much it helped me! I also like the way it explains things because I need help in knowing exactly what Scripture means.
Subscriber in North Carolina

Beyond Today magazine is the most brilliant evangelistic magazine I have read for a long time. The articles are short and to the point, and cover the most important topics.
Reader in Australia

Thankful for publications

Thank you for providing such enlightening and comprehensive literature. I was raised without religion and, now in my 50s, I feel the void. I am reading the Bible for the first time, and your literature is wonderful in helping to read the message and the meaning, not just the words. I am receiving a much richer understanding with your help. Thank you so much.
Reader in Maryland

I’m one of those people who have read a study guide from your association. I would just like to say thank you for your efforts and determination on writing your publications. Thank you for letting God use you as a channel of blessings. Your study guides really help a lot in the development of a Christian’s understanding about the Word of God. God bless you all. Please continue doing good deeds for God!
Reader in Quezon City, Philippines

Enjoys Beyond Today television

I love Beyond Today. It’s just a straightforward show that reminds us of the importance of constant contact with God through discipline.
Viewer in Australia

Thank you very much for your ministry. When I moved in with my mother to care for her, I stopped going to church and began looking for Christian services on TV to substitute. I watched some and felt satisfied until I saw your program. I was amazed and excited to hear your message and to realize that I have been deceived about so much all my life. Your programs have been such a blessing, and I am sharing as much as possible with my mom. She reads the literature I receive from you. I am so grateful for your ministry and thank you for giving me the truth and informing me of how the current events and activities around the world are right out of prophecy. May God continue to bless you and be with us until Christ returns.
Viewer in Baltimore, Maryland

Looking for a congregation of worshipers

Your magazine really inspires my husband and me, and we are trying to tell everyone about the magazine. I no longer celebrate any of the pagan holidays that were programmed into me and which I always loved. For years we just carried on with those traditions, not paying attention to what the Bible really says by reading it for ourselves. I would appreciate you letting us know where we can go to church near us. I am not perfect, but I keep learning every day that the practices I thought were so important really mean nothing of value in the end.
Subscriber in Kentucky

We are reading through all your study guides, listening to your sermons and sermonettes and watching your service online. We too believe that we should obey all God’s commandments and teachings. But most churches do not obey them all, observing holidays that are pagan instead of the Holy Days that God commanded us to keep. We have a small gathering on the Sabbath, and we have been meeting for two years now. We would very much like to be a part of your gathering. Thank you for hearing us and helping us.
Readers in Maine

We invite readers who are in areas where we have no congregation to watch our weekly Sabbath services webcast at ucg.org/webcast. You may also wish to watch our midweek Bible study at ucg.org/beyond-today/webcast. To find the location of our congregation nearest you, go to ucg.org/congregations.

I am a Pakistani Christian keeping all of the Almighty’s commandments—the Ten Commandments, high Sabbaths, avoiding unclean meats, etc.—for the last two years. In Pakistan, sadly I have not come across any church or even a single soul that observes what the Almighty commands. So unfortunately I have no church to attend, and was wondering if you have any churches in Karachi, Pakistan. I pray that the Almighty would broaden your ministry to reach the lost sheep of Christ all over the world. I am keen to make a donation, but have no means to send you a gift from my country.
Reader in Pakistan

You can find the closest local congregation at ucg.org/congregations. As for a church in Pakistan, sadly we do not have one, but are thrilled that the gospel of Jesus Christ is reaching people all over the world and changing lives.

Published letters may be edited for clarity and space. Address your letters to Beyond Today, P.O. Box 541027, Cincinnati, OH 45254-1027, U.S.A., or e-mail BTinfo@ucg.org (please be sure to include your full name, city, state or province, and country).
THE BIBLE
AND YOU
The Cup and the Dish
Washing dishes either by hand or with a dishwasher is a routine chore. Even Jesus Christ
used washing dishes as a metaphor when speaking to religious teachers of His day
about their need for spiritual cleanliness. Discover how His vital words apply to us all.
by John LaBissoniere

Although . . . Josephine’s work and resolve . . . institutions serving many meals daily.

Selling her unique product was Josephine’s next challenge. In describing her feelings after making her first “cold-call” sales appointment at the elegant Sherman House hotel in Chicago, . . .

To examine the prospect of creating a dishwashing machine, Josephine began sketching designs on paper.
Josephine Cochrane

First, the bad news

Before describing that good news, let’s . . .

. . . (Mark 7:3-4). Jesus rebuked them further by stating: “You blind Pharisee! First wash the inside of the cup and the dish, and then the outside will become clean, too” (Matthew 23:26, New International Version).
Also, Jeremiah wrote, “The heart
is deceitful above all things, and
desperately wicked; who can know it?”
(Jeremiah 17:9). Furthermore Isaiah said:
“We are all infected and impure with sin.
When we display our righteous deeds,
they are nothing but filthy rags” (Isaiah
64:6, NLT).
As hard as it may be to accept, these
passages pertain to what we are really
like (1 John 1:8). Indeed, all of us have
broken God’s laws, which define what
sin is (Romans 3:23; 1 John 3:4).
But even now, when we may think we
have heard the worst, there is more to
state. What is this last bit of bad news?
It’s that our sins have cut us off from our
Creator (Isaiah 59:2).
But before discussing that, we need to
return briefly to the image of the dirty
kitchen. What should be done to clean it
up? To begin, the fouled dishes and pans
should be immersed in hot water and
We need to admit we have sinned and deeply repent of those transgressions (1 John 1:9). We must accept the suffering and death of Jesus Christ as full payment for our sins (Colossians 1:22).

Following this we need to take another crucial action. We must be baptized. Through baptism we make the commitment to obey God’s commandments and surrender our lives to His service (Romans 6:13; Matthew 19:17). Baptism symbolizes the washing away of all our former sins so we can move forward with a clean conscience (Acts 22:16).

After baptism we need to receive God’s gift of His Holy Spirit through the laying on of hands by His ministry (Hebrews 6:2). Obtaining God’s Spirit is of key importance so that we can fulfill the promise we made at baptism to keep God’s laws (Titus 3:5-6). Through the Holy Spirit, we are empowered to build a profound, personal relationship with God and to truly love and care for other people (Romans 5:5; 1 Peter 1:22).

Following repentance, baptism and receiving the Holy Spirit, we need to make spiritual progress so we can become increasingly like Jesus Christ (John 13:15). We do this by immersing ourselves in regular study and diligent application of God’s Word (2 Timothy 2:15).

The Bible’s insights provide us the continual divine cleansing we need (John 15:3; Ephesians 5:26-27). When we spend time each day studying the Scriptures and practicing their admonitions, we will be better able to resist temptation that can lead to sin (Matthew 26:41).

God’s Word cleanses and purifies
Learning and applying God’s Word requires diligent personal effort. Unlike a dishwasher, we can’t simply push a button and put our spiritual lives on an “automatic” setting. Rather, we must employ God’s mighty help to overthrow sinful habits while “bringing into captivity every thought to the obedience of Christ” (2 Corinthians 10:5). We need to be highly sensitive to the dirt of sin by allowing the Bible’s words to purify our minds and hearts (Romans 12:2).

LEARN MORE
How can we clean up our lives and become the kind of spiritually vibrant people God wants us to be? Request or download your free copy of our study guide Tools for Spiritual Growth to help you discover the answers in the pages of your Bible!
BTmagazine.org/booklets
Visiting Widows and
Widowers in Their Affliction
What can we do to help someone who has lost a loved one?
Here are some practical and biblical solutions.
by Janet Treadway
Recently a neighbor, Ann, was found dead in her condo
above us. Her husband had died six months earlier. I knew
that she had taken it very hard, but she seemed so strong
and was going on with her life.
I would see her in passing when she was out walking her dog.
I remember her mentioning to me that she hoped she did not
disturb me while she was crying so hard the night before. After
she was found dead, I was laden with guilt. I had been too busy
to take the time to chat more with her and find out how she was
really doing. Now she was gone. Could a little of my time with
her have prevented her death?
Ann was not very old and seemed to me to be a picture of
good health, so her passing took me by surprise. Her daughter
lived about four hours away, so it was hard for her to check
in on her. Ann seemed to be busy working at what she loved
doing, which was singing. In fact, that was how she met her
husband—they were both in a jazz singing group here in our
city. She was scheduled to sing on the day she was found dead.
Sadly, even her singing didn’t take away the hidden pain she was
going through.
Losing a spouse can be devastating. A study has found that
when a husband or wife dies, the remaining spouse’s risk of
dying is 66 percent higher in the three months afterward. Grief
can even affect the immune system. There can even be what’s
called “Sudden Adult Death Syndrome,” a cardiac condition that
can be triggered by emotional stress.
How can we help those who’ve lost the love of their life,
their life’s partner, their spouse? People may seem okay on the
outside, just as Ann did to me for the most part, but on the
inside grief has overtaken and overwhelmed them.
Here are some positive steps you can take to help:
Reach out! Call, text or e-mail them often—even if they
don’t respond. Let them know you are there for them. This is
especially important after the funeral when other people return
to their normal routines and aren’t there to provide support.
Stay involved!
Include them in your family’s activities. As time permits,
make sure they are not spending their days alone.
Listen, and don’t try to “fix” their feelings. They will need
a listening ear so they can vent their emotions. Some of the
most common feelings and concerns after the loss of a spouse
are reflected in statements like these:
• “I’ve lost my best friend.”
• “I’m angry.”
• “I feel guilty I didn’t do enough for him [or her].”
• “I’m afraid.”
• “I worry about lots of things, especially money.”
• “I suddenly feel very old.”
• “I feel sick all the time.”
• “I think about my own death more.”
• “I seem to be going through an identity crisis.”
• “I feel relieved that his [or her] suffering is over, then immediately guilty for feeling that way.”
Widows and widowers need your time so they can express
all of these confusing emotions. Through this they can begin to
heal and move forward. Studies clearly show that mortality rates
are higher among those who don’t articulate their grief; this is
especially true with men. Always ask God to help you say and do
the right thing to help them.
Offer to help with paperwork, housework and grocery shopping. This is especially true for a widower whose wife might have done all those things. On top of the death, these day-to-day chores can be overwhelming. Bring frozen meals. Just make yourself available and offer support, stating directly, “Please tell me what I can do for you.”
The Bible says much about caring for widows, but the need is
also there for widowers. New research has found that a grieving
husband is more likely to die shortly after losing his wife, while a
widowed woman is more able to carry on with life.
Professor Javier Espinosa, who led a study at the Rochester
Institute of Technology, said: “When a wife dies, men are often
unprepared. They have often lost their caregiver, someone who
cares for them physically and emotionally, and the loss directly
impacts the husband’s health. This same mechanism is likely
weaker for most women when a husband dies” (quoted in The
Telegraph, Oct. 22, 2012).
If you are going through a loss or know someone who is,
remember Psalm 34:18: “The Lord is close to the brokenhearted and
saves those who are crushed in spirit” (New International Version).
Always ask God to encourage those who are going through loss.
Never take for granted that people are doing okay by the
brave face they put on, as I did with Ann. Be involved, take their
hand, and encourage them through this. Don’t forget widows
and widowers in their time of need!
Follow Me...
THE BIBLE AND YOU
A Promise Is a Promise!
We can trust in God and Christ to lead and help us on the path of life—through the Holy Spirit.
by Robin Webber

There’s a story about an old Scottish woman who traveled around the countryside selling housewares. Whenever she came to a fork in the road, she would throw a straw in the air, and when it dropped to the ground she would proceed in the direction it indicated.

The residents of the area knew her strange custom, but one day a friend saw her tossing the straw several times before choosing a path. He inquired, “Why do you do that more than once?” “Oh,” she replied, “it kept pointing to the road on the left, and I wanted to go the other way because it appeared smoother.” She proceeded to toss the straw to the wind until she got the direction she wanted.

Let’s all recognize that Scots and women are not alone in exercising such techniques. We all too often do our own mental form of tossing straw to guide our decisions with matters far more serious than selling housewares. Yes, even Christians get stuck when confronted with our personal forks in the road and humanly seek that smoother path.

Exactly what did Christ promise would come? Are His promises really better than our human premises of straw?

“I will come to you”
What does this mean for us? In extending His invitation of “Follow Me,” Jesus Christ boldly proclaimed, “I am the Way” (John 14:6, emphasis added throughout). And He is the singular Way at that!

In so doing, He made specific promises to inform and encourage those sincerely following Him: “And I will pray the Father, and He will give you another Helper . . . the Spirit of Truth . . . I will not leave you orphans; I will come to you” (verses 16-18).

But a condition for us is that we must be willing to let go of our self-made bundle of security straws.

Humanly, therein lies the challenge. Any reading of Scripture informs us about Jesus Christ’s invitation to those seeking after Him. He plainly stated in the Beatitudes (Matthew 5:1-12) that we could be incredibly happy without the trappings of happiness. We can be fearless even when our knees are shaking and we live in constant trouble in a world that relies on wind and straw rather than eternal truths.

What do I mean? Let’s consider for a moment:

In one day, the disciples went from experiencing a vision of Christ transfigured in glory to not being able to battle the demons down here below (Matthew 17:1-21).

Within 24 hours, Christ went from precious moments of washing others’ feet to having His hands nailed to a wooden beam.

The joy of Pentecost that year was short-lived as apostles were jailed and Stephen, one of the first deacons in the Church, was martyred.

The road is not always smooth, but we must keep in mind the destination and realize that it’s worth every step to get there.
Multiple promises given by Christ
To people of Christ’s day, the Spirit of God was perhaps considered by some as awesome yet impersonal and involved in the lives of select people for certain periods of time.

Yet Christ, the Word and voice of God on earth, declared incredible aspects of a new relationship with the many and not just the few. Jesus stated:

• The Spirit would be with us forever (John 14:16) and not merely temporarily.
• It would be given to those whom the Father called and who responded to that selection and not given to the world at large at this time (John 14:17).
• It would not come and go as before, but would—notice—live with us and even in us (John 14:17).
• While not a person as many claim (see our free study guide Is God a Trinity?) the Spirit would have qualities that literally could teach us, guide us and remind us of Christ’s words (John 14:26).
• It would convince (convict) us of sin, show us God’s righteousness, and even declare God’s judgment regarding evil (John 16:8).
• It would even guide us and grant us insight towards future happenings—and all to give due glory to the One sent by our Heavenly Father (John 16:13-14).

With this all promised, when Pentecost came that year the disciples were praying and obedient but, perhaps equally important, they were expectant and believed Christ’s promises (Acts 1:12-14).

But why? They had spent much time with the Son of God who was filled with the Spirit of God—Jesus of Nazareth. They had witnessed that the Word was made flesh and dwelt among them and that the Spirit descended from heaven and rested on Him (John 1:14, 32).

They had real-life experience with this One who was full of the Spirit (Luke 4:1), who avoided sin in the wilderness, where He exercised the wisdom of God in remembering and declaring Scripture to avoid evil and remain righteous and give God glory in stating, “For it is written, you shall worship the Lord your God, and Him only you shall serve” (Matthew 4:10).

It was He who was the embodiment of the messianic prophecy that “the Spirit of the Lord shall rest upon Him, the Spirit of wisdom and understanding, the Spirit of counsel and might, the Spirit of knowledge and of the fear of the Lord” (Isaiah 11:2).

Again, they did not know what was to come, but they knew they had to use the promised Spirit fully—to rely on it and be guided by God through it in all of life’s challenges.

Promises you can trust
They came to fully grasp and understand that their Master and Lord, the Christ, and His Father would fulfill what was promised and would come to dwell in them by the Holy Spirit—it being the Spirit of the Father and Christ, both of whom are Holy and Spirit (Leviticus 11:44; 1 Peter 1:16; John 4:24).

The apostle Paul makes this incredible truth extremely plain in Romans 8:9: “But you are not in the flesh but in the Spirit, if indeed the Spirit of God dwells in you. Now if anyone does not have the Spirit of Christ, he is not His.” It is the same Spirit, the one Spirit that makes us all one (Ephesians 4:4; Hebrews 2:10-11).

The Holy Spirit is not simply a spiritual bulldozer, hammer, or screwdriver to reach for, but is literally the indwelling essence of God the Father and Christ in us through which we can live a daily life of worship, glorify Them in all we do, and be a blessing to others.

As the apostle Paul came to confess and understand: “I am crucified with Christ: nevertheless I live; yet not I, but Christ liveth in me: and the life which I now live in the flesh I live by the faith of the Son of God, who loved me, and gave himself for me” (Galatians 2:20, King James Version).

There truly is a difference between walking with Christ and allowing Him to walk in us and draw on His Spirit. Just ask the disciples who had the experience of walking with Jesus Christ during His ministry, followed by His death and resurrection before that eventful Pentecost. Talk about a before-and-after snapshot comparison of hearts and minds!

It’s the same faithful Spirit that enabled Jesus in the flesh to seek God’s will and not His own in every thought, word and deed even when smoother roads lay in front of Him. Christ never twirled a piece of straw hoping for a different path other than the one the Father set before Him. He believed God and did what He said.

When we truly grasp and believe Jesus Christ meant what He said and keeps His promises, it doesn’t mean that life necessarily becomes humanly easier. But it becomes eternally rewarding, as we come to the forks in the roads of our life’s journey and toss our self-made straw aside.

For we have chosen to exist beyond the moment, and we thank our Heavenly Father every day that we not only believe in His Son, but know that He exists within us and within all those who have heeded the call of “Follow Me.”

Now it’s time to walk in the Spirit by faith as He did!

LEARN MORE
How does the Holy Spirit work in the lives of believers? What does it do for them, and how? Learn what the Bible really says in our free study guide Transforming Your Life: The Process of Conversion. A free copy is waiting for you!
BTmagazine.org/booklets
Bible Prophecy and You
MINI-STUDY SERIES
How You Can Correctly Understand GOD’S PROPHECIES AND PROMISES
Welcome to the fourth study! In this study we’ll look at some valuable keys for correctly understanding Bible prophecy.
A person in the state of Washington who now
embraces the Bible in its entirety tells this story:
“As a child, I faithfully attended church services every
Sunday with my parents and siblings. We were taught
many of God’s truths, and I tried to obey them and my
parents.
“When I was 15 years old, I developed rheumatic fever,
and it required many weeks of bed rest and medication to
regain my health. During this time, I read many books,
one of which was the Bible. One day, as I was reading
the book of Isaiah, I realized that my church never had
studies or sermons to explain how these wonderful and
beautiful words applied to all of us.
“I wondered, when would the world be such a wonderful place when all people would dwell safely together
in peace and the wild animals would be tamed and all
diseases would be healed? I felt I wasn’t being taught the
‘whole story’ of the Bible.
“Answers to these questions did not come until I was
28 years old and married with four young children.
Through much study and repentance and baptism, I
finally understood that the whole Bible needs to be
accepted as God’s truth and that I cannot just believe and
accept what feels convenient to me or limit the truth to
what I was taught as a child.
“Only then did I begin to understand the promises and
prophecies of Isaiah and other prophetic books of the
Bible. Finally, I knew that God has wonderful promises
that He will keep, prophecies that tell of Christ’s return
to earth, when He will begin a time of restoration and
peace and healing and set up His Kingdom on this earth!
Then, at last, I finally knew the ‘whole story’!”
Let the Bible interpret the Bible!
One awesome proof that the Bible is divinely inspired
is its perfect harmony and consistency all the way
through, even though it was penned by about 40 different
writers over a span of 1,500 years! The Bible never contradicts
itself. Even passages that at first glance appear to contradict
others are found, through more thorough study, to not be
contradictory after all.
Many people dismiss most prophecy as being merely
allegorical or symbolic. An important general rule is to take
everything the Bible says as literally true except where it is
clearly symbolic. And remember that everything in the Bible
is important. When you encounter a symbol, try to determine
what it is symbolizing.
These days you can hear many people quoting Bible verses
in espousing certain points of view, but often they are twisting
the meaning as people did even in Peter and Paul’s day (see
2 Peter 3:15-16). Instead, when we are trying to understand
a prophetic scripture, we should look for the same subject or similar wording in other parts of the Bible and never accept an interpretation that is contradictory.

For example, we are able to understand much about the book of Revelation by comparing it with the book of Daniel, Jesus’ Olivet Prophecy and other parts of the Bible.

You’ve probably heard the saying, “History tends to repeat itself.” That’s true partly because human nature stays the same and partly because God chooses to repeat His actions in the ways He deals with mankind.

Thus Bible prophecy is often dual. A prophecy could have been fulfilled to a certain degree in the past and still await a more complete fulfillment in the latter days. And studying the Bible and history to see how God fulfilled a prophecy in the past can give us a better understanding of how He will likely fulfill it in the future.

Now let’s take a look at some other keys for understanding Bible prophecy.

u What is the number one focus of Bible prophecy?
“Then He [the risen Jesus] said to them: ‘All things have been delivered to Me by My Father, and no one knows who the Son is except the Father, and who the Father is except the Son, and the one to whom the Son wills to reveal Him’” (Luke 10:22-24).

The focus of Bible prophecy is Jesus the Christ (the Messiah). The Bible has many prophecies of His first appearance to become our Savior and His second coming as glorified King of Kings (1 Timothy 6:15). During His earthly ministry, His example and His message also revealed what God the Father is like.

u When Jesus preached about prophecy, what was His number one focus?
“Now after John was put in prison, Jesus came to Galilee, preaching the gospel of the kingdom of God, and saying, ‘The time is fulfilled, and the kingdom of God is at hand. Repent, and believe in the gospel’” (Mark 1:14-15).

“But seek first the kingdom of God and His righteousness . . .” (Matthew 6:33).

“. . . to whom He also presented Himself alive after His suffering by many infallible proofs, being seen by them during forty days and speaking of the things pertaining to the kingdom of God” (Acts 1:3).

The word gospel means “good news.” The primary message that Jesus and the apostles preached was “the gospel of the kingdom of God.” That included God’s offer of eternal life so that we can “inherit the kingdom of God” (1 Corinthians 15:50). The entire Bible ultimately points toward the Kingdom of God that would come through Christ.
u Can we determine exactly when prophesied events
will happen?
“Now as He sat on the Mount of Olives, the disciples came
to Him privately, saying, ‘Tell us, when will these things be?
And what will be the sign of Your coming, and of the end of
the age?’” (Matthew 24:3, 32-36).
Jesus Himself plainly tells us that we cannot know in
advance exactly when prophesied events will take place.
Trying to predict exactly when prophetic events will happen
is not a productive use of time. Instead we are to focus on
what and why more than when. We live in a dangerous and
unpredictable world, so we need to always focus on being
spiritually ready to meet our Maker, whether at our death or at
His coming (Matthew 24:44).
u Do we need to learn about the origins and identities of
major nations?
“And Jacob called his sons and said, ‘Gather together, that
I may tell you what shall befall you in the last days. Reuben . . .
unstable as water, you shall not excel . . . The scepter shall not
depart from Judah . . . Zebulun shall . . . become a haven for
ships . . . Dan shall be a serpent by the way . . . Joseph is a fruitful bough . . . his branches run over the wall . . . The blessings
of your father . . . shall be on the head of Joseph . . .’ All these
B Tm a g a z i n e . o r g
•
July-August 2016
37
are the twelve tribes of Israel” (Genesis 49:1-28).
“Understand, son of man, that the vision refers to the time
of the end . . .” (Daniel 8:17-23).
Yes, to understand what is being talked about here, we do
need to know about the historical and prophetic identities of
various peoples. End-time Bible prophecy refers to nations by
their ancient and tribal names that are seldom or never used
today. There is great value in learning history, especially Bible
history. We soon learn that tribes of people that are assumed
to be “lost” are not really lost! Paul and James knew where “the
twelve tribes” of Israel were located (see Acts 26:7; James 1:1).
Jesus even made it clear that they will have a prominent role
after He returns (Matthew 19:28).
Bible prophecy is largely focused on the end time, so we
should expect the Bible to have prophecies of past and present
superpowers like Britain and the United States. Our wellresearched free study guide The United States and Britain in
Bible Prophecy traces the biblical and historical origins of
these important modern nations.
u What necessary change in the nature of man does prophecy point to?
By nature we have a “heart of stone.” We tend to be hardhearted, stubborn and “stiff-necked” (Deuteronomy 9:6).
Indeed, “the heart is deceitful above all things, and desperately
wicked” (Jeremiah 17:9). That has to change. Once God has
opened our minds, He helps us with His Spirit to see ourselves
as we really are.
He leads us to repent of our sins so He can forgive us. He
helps us to become spiritually converted. After conversion, a
person begins to have a “heart of flesh”—a soft, humble and
teachable heart. Then God’s promise is fulfilled: “I will put
My law in their minds, and write it on their hearts” (Jeremiah
31:33).
u What are some things to consider to have a spiritually
balanced interest in prophecy?
“But take heed; see, I have told you all things beforehand”
(Mark 13:23).
Many people mostly ignore Bible prophecy and miss out on its benefits—the biblical worldview, the motivation and the comfort it can give us. Concerning His prophetic teaching, Jesus said to “take heed.” On the other hand, some dwell far too much on the technical aspects of Bible prophecy, such as trying to figure out exact dates, while neglecting matters much more important, like loving God and loving other people.

When you are reading prophetic sections of the Bible, be sure to meditate on the spiritual lessons they convey. For example, many people assume that the book of Revelation is all predictions. In actuality, it is full of valuable spiritual teaching.

u Are prophecies often conditional?
“Now it shall come to pass, if you diligently obey the voice of the Lord your God, to observe carefully all His commandments which I command you today, that the Lord your God will set you high above all nations of the earth. And all these blessings [that follow in the prophecy] shall come upon you and overtake you, because you obey the voice of the Lord your God . . .

“But it shall come to pass, if you do not obey the voice of the Lord your God, to observe carefully all His commandments and His statutes which I command you today, that all these curses [that follow] will come upon you and overtake you” (Deuteronomy 28:1-2, 15, emphasis added throughout).

“The instant I speak concerning a nation and concerning a kingdom, to pluck up, to pull down, and to destroy it, if that nation against whom I have spoken turns from its evil, I will relent of the disaster that I thought to bring upon it. And the instant I speak concerning a nation and concerning a kingdom, to build and to plant it, if it does evil in My sight so that it does not obey My voice, then I will relent concerning the good with which I said I would benefit it” (Jeremiah 18:7-10; compare Jonah 3).

Yes, what happens to nations, and to each of us personally, depends on whether we choose to obey or disobey God. God said, “I have set before you life and death, blessing and cursing; therefore choose life” (Deuteronomy 30:19). It’s a matter of cause and effect. God lets us choose what to sow, but we will reap what we sow (Galatians 6:7-8).

Apply now
For a good introduction to the book of Revelation, read Revelation 1:3. People will be blessed if they read this book, if they hear and pay attention to it and if they keep (obey) its spiritual directives! If you have a red-letter Bible, you will notice that much of the first three chapters of this book is direct quotation from Jesus Christ.

Then read Revelation 22:12-14 in the King James or New King James Version. This very last chapter of the Bible gives a grand overview. Think of how simple and clear this is. The Bible is consistent all the way through. God wants His people to show their love by obeying Him—for our own good!

What will you do today to show God that you love and obey Him? Will you search His Word to learn what He wants you to do and then begin doing it?
Worldwide Television Airtimes
For the most current airing times, or to download or view new and archived programs online, visit BeyondToday.tv
UNITED STATES
NATIONWIDE CABLE TV
The Word Network
View on cable at the following times:
Sun 11 a.m. ET, 10 a.m. CT, 9 a.m. MT, 8 a.m. PT;
Fri 4 p.m. ET, 3 p.m. CT, 2 p.m. MT, 1 p.m. PT
The Word Network is available in over 200 countries, reaching viewers in Europe, Africa, Asia, Australia and the Americas. It reaches 86 million homes in the United States alone through DirecTV, Comcast, Time Warner Cable, Bright House Networks, Cox, Cablevision, Charter and other cable operators—and another 9 million homes on Sky TV in the United Kingdom.
BROADCAST TV
Alaska
Anchorage ch. 18, Tue 9 p.m.
California
San Diego ch. 18, 19, 23, Mon 5 p.m.
San Francisco ch. 29, Sun 6:30 p.m.
North Carolina
Durham ch. 18, 97-3, Wed 7:30 a.m.
Ohio
Toledo ch. 69, Sun 5 p.m.
Oregon
Gresham/East Portland ch. 22/23, Sun 7:30 p.m.
Milwaukie ch. 23, Sun 6 a.m.; Mon 11:30 p.m.; Wed 4:30 p.m.; Thu 7 a.m.; Fri 5:30 a.m.; Sat 8:30 a.m. & 4:30 p.m.
July-August 2016
Volume 21, Number 4
Circulation: XXX,000
Beyond Today (ISSN: 1086-9514) is published bimonthly by the United Church of God,
an International Association, 555 Technecenter Dr., Milford, OH 45150. © 2016 United
Church of God, an International Association. Printed in U.S.A. All rights reserved. Reproduction
in any form without written permission is prohibited. Periodicals Postage paid at
Milford, Ohio 45150, and at additional mailing offices. Scriptural references are from the
New King James Version (© 1988 Thomas Nelson, Inc., publishers) unless otherwise noted.
Publisher: United Church of God, an International Association
Council of Elders: Scott Ashley, Bill Bradford, Jorge de Campos, Aaron Dean,
Robert Dick, John Elliott, Mark Mickelson, Mario Seiglie, Rex Sexton,
Don Ward (chairman), Anthony Wasilkoff, Robin Webber
Church president: Victor Kubik Media operation manager: Peter Eddington
Managing editor: Scott Ashley Senior writers: Jerold Aust, John LaBissoniere,
Darris McNeely, Steve Myers, Gary Petty, Tom Robinson Copy editors: Milan Bizic,
Tom Robinson Art director: Shaun Venish Circulation manager: John LaBissoniere
To request a free subscription, visit our website at BTmagazine.org or contact the office nearest
you from the list below. Beyond Today is sent free to all who request it. Your subscription is provided
by the voluntary contributions of members of the United Church of God, an International Association,
and others.
Personal contact: The United Church of God has congregations and ministers throughout the United
States and many other countries. To contact a minister or to find locations and times of services, contact
our office nearest you or visit our website at ucg.org/churches.
Unsolicited materials: Due to staffing limitations, unsolicited materials sent to Beyond Today will
not be critiqued or returned. By their submission authors agree that submitted materials become the
property of the United Church of God, an International Association, to use as it sees fit. This agreement
is controlled by California law.
NORTH, SOUTH AND CENTRAL AMERICA
United States: United Church of God, P.O. Box 541027, Cincinnati, OH 45254-1027
Phone: (513) 576-9796 Fax (513) 576-9795 Website: BTmagazine.org E-mail: info@ucg.org
Canada: United Church of God–Canada, Box 144, Station D, Etobicoke, ON M9A 4X1, Canada
Phone: (905) 614-1234, (800) 338-7779 Fax: (905) 614-1749 Website: ucg.ca
Caribbean islands: United Church of God, P.O. Box 541027, Cincinnati, OH 45254-1027
Phone: (513) 576-9796 Fax (513) 576-9795 Website: BTmagazine.org E-mail: info@ucg.org
Spanish-speaking areas: Iglesia de Dios Unida, P.O. Box 541027, Cincinnati, OH 45254-1027, U.S.A.
Phone: (513) 576-9796 Fax (513) 576-9795 Website: ucg.org/espanol: goodnews.org.uk
Oregon City ch. 23, Sun 2:30 p.m.; Thu 10:30
a.m. & 2:30 p.m.; Fri 4:30 a.m.; Sat 3 a.m. & 4 p.m.
Washington
Everett
Sat & Sun 8:00 a.m. (NSW, VIC, ACT, QLD)
Sat & Sun 7:30 a.m. (SA)
Sat & Sun 6:00 a.m. (WA)
ch. 77, Wed 5 p.m.
NEW ZEALAND
Wisconsin
Kenosha
ch. 14, Sun 7:30 p.m.; Mon 7:30 p.m.
Milwaukee
ch. 96, Mon 2 p.m.; Tue 7 p.m.;
Wed 2 p.m. ch. 55, Sun 8 a.m.
CANADA
NATIONWIDE CABLE TV
Prime Television
(simulcast on Sky satellite platform) Sun 7 a.m.
SOUTH AFRICA
Cape Town DSTV
Sun 8:30 a.m. ch. 263 and open ch. 32, 67
Vision TV
Hope TV
Sun 6 p.m. ET
Sun 1 p.m. ET
See local listing for the channel in your area.
ST. LUCIA
Sun 9 a.m.
AUSTRALIA
Ch. 4ME—Digital 74 Metro, Digital 64 Regional
ch. DBS
TRINIDAD AND TOBAGO
2nd, 4th Sundays
CCN TV6 at 9:00 a.m. Website: labuonanotizia.org E-mail: info@labuonanotizia.org
Scandinavia: Guds Enade Kyrka, P.O. Box 541027, Cincinnati, OH 45254-1027
AFRICA
Cameroon: United Church of God Cameroon, BP 10322 Béssengue, Douala, Cameroon
East Africa, Madagascar and Mauritius: United Church of God–East Africa
P.O. Box 75261, Nairobi 00200, Kenya E-mail: kenya@ucg.org Website: ucgeastafrica.org
Ghana: P.O. Box AF 75, Adenta, Accra, Ghana E-mail: ghana@ucg.org
Malawi: P.O. Box 32257, Chichiri, Blantyre 3, Malawi Phone: +265 (0) 999 823 523
Nigeria: United Church of God–Nigeria, P.O. Box 2265 Somolu, Lagos, Nigeria
Phone: 8033233193 Website: ucgnigeria.org E-mail: nigeria@ucg.org
South Africa: United Church of God–Southern Africa, P.O. Box 1181, Tzaneen 0850, South Africa
Phone: +27 79 725 9453 Fax: +27 (0)86 572 7437 Website: south-africa.ucg.org
Zambia: P.O. Box 23076, Kitwe, Zambia Phone: (0026)0966925840 E-mail: zambia@ucg.org
Zimbabwe: P.O. Box 928, Causeway, Harare, Zimbabwe Phone: 0773 240 041
PACIFIC REGION
Australia and all other South Pacific regions not listed: United Church of God–Australia
GPO Box 535, Brisbane, Qld. 4001, Australia Phone: 07 55 202 111 Free call: 1800 356 202
New Zealand: United Church of God, P.O. Box 22, Shortland St., Auckland 1140, New Zealand
Phone: Toll-free 0508-463-763 Website: ucg.org.nz E-mail: info@ucg.org.nz
Tonga: United Church of God–Tonga, P.O. Box 518, Nuku`alofa, Tong 4774, MCPO, 1287 Makati City, Philippines Phone: +63 (2) 804-4444
Cell/text: +63 918-904-4444 Website: ucg.org.ph E-mail: info@ucg.org.ph
Singapore: United Church of God, GPO Box 535, Brisbane, Qld. 4001, Australia
Website: ucg-singapore.org E-mail: info@ucg.org.au
ALL AREAS AND NATIONS NOT LISTED
United Church of God, P.O. Box 541027, Cincinnati, OH 45254-1027
Phone: (513) 576-9796 Fax (513) 576-9795 Website: BTmagazine.org E-mail: info@ucg.org
Canada Post Publications Mail Agreement Number 40026236.
Canada return address: Beyond Today, 2835 Kew Drive, Windsor, ON N8T 3B7.
Address changes: POSTMASTER—Send address changes to
Beyond Today, Box 541027, Cincinnati, OH 45254-1027.
B Tm a g a z i n e . o r g
•
July-August 2016
39
Watch the Beyond Today TV program!
The Word Network
On Cable: Friday 4 p.m. ET, 3 p.m. CT, 2 p.m. MT, 1 p.m. PT
Sunday 11 a.m. ET, 10 a.m. CT, 9 a.m. MT, 8 a.m. PT
The Word Network is available in over 200 countries, reaching viewers in Europe, Africa, Asia,
Australia and the Americas. It reaches homes in the U.S. through DirecTV, Comcast, Time
Warner Cable, Bright House Networks, Cox, Cablevision, Charter and other cable operators—
and homes on Sky TV in the U.K.
W
hat question has more impact on your life, your future, your decisions, your plans than any
other? What question has the greatest bearing on your family, your
relationships, on everything you do?
The most vital question of all time is this: Does God exist?
If there is no God, then we are free to do as we please, Creator have
a purpose and plan for us? These questions are crucial!
Can you know whether God exists? In this eye-opening study guide you’ll
be amazed to learn what many scientists admit. You’ll discover many scientific finds that point to one inescapable conclusion: The universe is the result of an intelligence far
greater than anything we can imagine. Be sure to request your free copy today!
Request or download your free copy at BTmagazine.org/booklets
Reader Updates
Go to ucg.org/btupdate to sign up for e-mail updates including breaking news, important announcements and more from the publishers of Beyond Today.
Printed in the U.S.A.
Canada Post Publications Mail Agreement Number 40026236
® | https://pt.scribd.com/doc/317056856/Beyond-Today-Magazine-July-August-2016 | CC-MAIN-2017-34 | refinedweb | 29,240 | 61.56 |
I am just getting started with Rails 3 and I do not quite understand
how to go about renaming routes.
What I want:
To rename the path to the users#show controller/action pair. So
instead of the URL being it would just be
In the future, I’d also like to be able to add additional paths onto
the end such as:
How my user resources are set up:
resources :users, :except => [:destroy] do resources :favorites, :only => [:show, :update] resources :profiles, :only => [:show, :update] end
What I tried:
match :home, :to => 'users#show'
What happened:
ActiveRecord::RecordNotFound in UsersController#show Couldn't find User without an ID
What’s in the development.log file:
Started GET "/home" for 127.0.0.1 at 2011-03-10 13:36:15 -0500 Processing by UsersController#show as HTML [1m[35mUser Load (1.6ms)[0m SELECT "users".* FROM "users" WHERE
(“users”.“id” = 101) LIMIT 1
Completed in 192ms
ActiveRecord::RecordNotFound (Couldn't find User without an ID): app/controllers/users_controller.rb:19:in `show'
What’s in the user controller:
def show @user = User.find(params[:id]) respond_to do |format| format.html # show.html.haml end end
So, apparently it is storing the user id, as shown in the development
log as 101 but for whatever reason I am still getting this error?
Any help you can provide is greatly appreciated! | https://www.ruby-forum.com/t/rails-3-renaming-routes/204378 | CC-MAIN-2021-25 | refinedweb | 229 | 63.49 |
I.
Step 1: Gathering Supplies and Tools.
Components.
This project is built on a cheep hobby board from my local DIY store. The board measure 850mm wide by 500mm high and 18mm deep.
The LEDs used in this project are 5050 WS2812b mounted on circular PCBs that are approximately 9mm in diameter with solder pads on the rear.
I am using an Arduino Pro Mini compatible micro controller. Its the 5V 16 MHZ version. I chose this one as it has a super slim design, small foot print and all the nessary ports plus some spare for future upgrades. Its also 5 volt so I can use a single power supply for the LEDs, Micro controller and RTC
The time keeping is taken care of by an RTC (Real Time Clock) module that features the DS3231 chip. This chip is very accurate so the time shouldn't drift too much.
Also used:
Wire. Solder and hot glue.
Tools:
Power drill and wood drill bits (10mm and 5mm)
Soldering iron
Hot glue gun
wire snipps
Dremel and plunge router accessories
Step 2: Marking, Drilling and Routing
Drilling
- Using a strait edge find the centre of the board by drawing a line from opposite corners.
- Mark 3 circles using a piece of string and a pen. The outer most circle should be about 20mm from the edge of the board with the other 2 lines moving in by 15mm from the last line.
- I used a printed clock face to help me mark out the positions of each of the minutes and seconds on the outer 2 lines and hours on the inner line.
- Drill 10mm holes approximately 5mm deep for every hour, minute and second.
- Use the 5mm drill to make holes though the board for hour, minute and second.
Routing
Although this step is not necessary it will allow for the clock to be fitted flush to a wall.
- Using a router and circle guide route wire channels in the board
- Mark out and route a recess for the RTC and Micro Controller to live in.
- Route a channel from the outer lines to the recess for wires
Step 3: So Much Soldiering, Cutting and Stripping.
This next part is much easer to say than to do. My advice would be note to rush it. try and find a system and get into a rhythm.
Each of the LEDs needs 5 volts in, 5 volts out, Data in, Data out, Ground in and Ground out. including power for the micro controller and RTC its over 400 wires, all stripped and soldered at both ends.
A tacky blue substance is very useful for this step.
- I started by placing 2 LEDs in their holes next to each other to work out the length of wire needed to connect to each other.
- Using the 1st piece of wire as a guide I then cut 60 of each colour wire.
- Strip 2mm of sleeving from the ends of each wire and tin them with solder.
- Solder a small blob of solder onto each of the LED pads.
- Solder the wires to the LEDs to form two chains of 60 for the minutes and seconds and one chain of 12 for the hours. I used red wire for 5V, yellow for data and blue for ground.
- Take care to connect each Data Out (DOUT) to the Data In (DIN) of the next LED
- The last led in each chain dose not need a data out wire.
Once all the chains are completed its a good idea to test them before installing them. I used my Arduino UNO and the Adafruit NeoPixel Strand Test to confirm each LED was working.
Solder longer wires onto each of the chains for 5V, Ground and Data in.
At this point there should be five 5v wires, three Data wires connected to the Arduino Pro Mini and 5 Ground wires.
Strip 5mm from the ends of the 5v wires and solder them all together and repeat for the Ground wires.
After completing the three chains solder a 5V wire on to RAW pin of the Arduino Pro Mini and also onto the VCC pin for the RTC. A Ground wire to GND on the Arduino Pro Mini and RTC and then 2 more wires:
SCL from the RTC to A5 on the Pro Mini
SDA from the RTC to A4 on the Pro Mini
The data lines from the LEDs should connect to:
- Seconds - Digital Pin 3.
- Minutes - DigitalPin 4
- Hours - DigitalPin 5
Step 4: Installing
Once soldered, installing the LEDs in their holes should be straight forward. The LEDs need to be installed so the data runs around anti-Clockwise when looking at it from the back as the code is setup front facing.
I used a tiny amount of hot glue to hold them down as I want to be able to replace a single LED if it fails in the future.
I also used hot glue to keep all the wires neat and tidy and to fix the barrel connector to the board.
There are a number of arduino pro mini programming guides available. I use the external USB to serial converter method to load this code onto the Arduino:
This code will also set the time on the RTC to the time that it was compiled. so its important to just hut the upload button so it complies and uploads as quickly as possible.
Much of this code was borrowed from the NeoPixel Ring Clock by Andy Doro. Some from the Adafruit NeoPixel Strand Test and some I put together.
You will need to have installed a few libraries. They are available from the Libraries Manager on the Arduino software.
The Adafruit NeoPixel for the ws2812b LEDs
Wire for talking to the RTC over I2C (this is built in as standard)
and RTClib for knowing what to ask the RTC
/**************************************************************************
* * NeoPixel Ring Clock by Andy Doro (mail@andydoro.com) * * **************************************************************************
Revision History
Date By What 20140320 AFD First draft 20160105 AFD Faded arcs 20160916 AFD Trinket compatible 20170727 AFD added STARTPIXEL for 3D enclosure, variable starting point, added automatic DST support 20180424 AFD using DST library *
/ include the library code: #include <Wire.h> #include <RTClib.h>
#include <Adafruit_NeoPixel.h>
// define pins #define SECPIN 3 #define MINPIN 4 #define HOUPIN 5
#define BRIGHTNESS 20 // set max brightness
#define r 10 #define g 10 #define b 10 RTC_DS3231 rtc; // Establish clock object
Adafruit_NeoPixel stripS = Adafruit_NeoPixel(60, SECPIN, NEO_GRB + NEO_KHZ800); // strip object Adafruit_NeoPixel stripM = Adafruit_NeoPixel(60, MINPIN, NEO_GRB + NEO_KHZ800); // strip object Adafruit_NeoPixel stripH = Adafruit_NeoPixel(24, HOUPIN, NEO_GRB + NEO_KHZ800); // strip object byte pixelColorRed, pixelColorGreen, pixelColorBlue; // holds color values
void setup () { Wire.begin(); // Begin I2C rtc.begin(); // begin clock
Serial.begin(9600); // set pinmodes pinMode(SECPIN, OUTPUT); pinMode(MINPIN, OUTPUT); pinMode(HOUPIN, OUTPUT);
if (rtc.lostPower()) { Serial.println("RTC lost power, lets set the time!"); // following line sets the RTC to the date & time this sketch was compiled rtc.adjust(DateTime(F(__DATE__), F(__TIME__))); // This line sets the RTC with an explicit date & time, for example to set // January 21, 2014 at 3am you would call: // rtc.adjust(DateTime(2014, 1, 21, 3, 0, 0)); }
stripS.begin(); stripM.begin(); stripH.begin(); //strip.show(); // Initialize all pixels to 'off'
// startup sequence delay(500);
colorWipeS(stripS.Color(0, g, 0), 5); // Blue colorWipeM(stripM.Color(r, 0, 0), 5); // Blue colorWipeH(stripH.Color(0, 0, b), 50); // Blue
delay(1000); DateTime theTime = rtc.now(); // takes into account DST byte secondval = theTime.second(); // get seconds byte minuteval = theTime.minute(); // get minutes int hourval = theTime.hour(); hourval = hourval % 12; // This clock is 12 hour, if 13-23, convert to 0-11`
for (uint16_t i = 0; i < secondval ; i++) { stripS.setPixelColor(i, 0,0,b); stripS.show(); delay(5); }
for (uint16_t i = 0; i < minuteval ; i++) { stripM.setPixelColor(i, 0,g,0); stripM.show(); delay(5); }
for (uint16_t i = 0; i < hourval ; i++) { stripH.setPixelColor(i, r,0,0); stripH.show(); delay(5); }
}
void loop () {
// get time DateTime theTime = rtc.now(); // takes into account DST
byte secondval = theTime.second(); // get seconds byte minuteval = theTime.minute(); // get minutes int hourval = theTime.hour(); // get hours hourval = hourval % 12; // This clock is 12 hour, if 13-23, convert to 0-11`
stripS.setPixelColor(secondval, 0,0,20); stripS.show(); delay(10); if (secondval ==59 ) { for (uint8_t i = stripS.numPixels(); i > 0; i--) { stripS.setPixelColor(i, 0,g,0); stripS.show(); delay(16);} }
stripM.setPixelColor(minuteval, 0,g,0); stripM.show(); delay(10); if (secondval ==59 && minuteval == 59) { for (uint8_t i = stripM.numPixels(); i > 0; i--) { stripM.setPixelColor(i, r,0,0); stripM.show(); delay(16);} }
stripH.setPixelColor(hourval, r,0,0); stripH.show(); delay(10); if (secondval == 59 && minuteval == 59 && hourval == 11) { for (uint8_t i = stripH.numPixels(); i > 0; i--) { stripH.setPixelColor(i, 0,0,b); stripH.show(); delay(83);} } // for serial debugging Serial.print(hourval, DEC); Serial.print(':'); Serial.print(minuteval, DEC); Serial.print(':'); Serial.println(secondval, DEC); }
// Fill the dots one after the other with a color void colorWipeS(uint32_t c, uint8_t wait) { for (uint16_t i = 0; i < stripS.numPixels(); i++) { stripS.setPixelColor(i, c); stripS.show(); delay(wait); } }
void colorWipeM(uint32_t c, uint8_t wait) { for (uint16_t i = 0; i < stripM.numPixels(); i++) { stripM.setPixelColor(i, c); stripM.show(); delay(wait); } }
void colorWipeH(uint32_t c, uint8_t wait) { for (uint16_t i = 0; i < stripH.numPixels(); i++) { stripH.setPixelColor(i, c); stripH.show(); delay(wait); } }
Step 5: Final Touches
All that should be left now is to fix down the RTC and Micro Controller down in the recess.
I have fitted the RTC battery side up so I can easily change the battery if needed.
Connect the 5v wires to the + side of the connector and the Ground to the - side
Power it UP!
I have mine connected to a USB battery bank but A USB phone charger would work just as well.
Note:
The brightness of the LEDs is set in the code. It has been set low to keep the current draw low. At full brightness with all the LEDs lit it could draw nearly 8 amps. With the current setup its less than 1.
Runner Up in the
Clocks Contest
19 Discussions
Question 7 months ago on Step 5
I have been working on the clock. Its coming along nicely. I want to change the start color of the 60 second position and the 12 o'clock position to blue. Can you help me identify the line of code that will do this? and - or - can I do this?
Reply 6 months ago
Hello again,
Sorry for the delay getting back to you.
I have gone very the code and listed all the lines that change the colours below:
Line 16 defines Red, line 17 defines Green and line 18 defines Blue
You can change the colour anywhere this line appears :
stripS.setPixelColor(i, X,X,X);
by changing the X,X,X to RGB values from 0 to 255 or changes the values on Line 16,17 and 18
during setup all the strips are set to the colurs defined on line16,17 & 18 but can be changed to anything you like (lines 54,55 &56)
Still in Setup. from line 67 through to line 83 the strips are set to show the current H,M & S. you can also change the colour values to anything you like.
in the main loop:
line 98 changes the colour the seconds change to as it goes round (0,0,20 is dim blue) but looking at it it should be "b" so its set from line 18
line 105 is the colour the seconds change back to at 59 seconds.
line 111 sets the colour for the minutes as it changes and line 118 changes it back
line 124 sets the colour for the hours and line 131 set the colour when the Hours are 11 & Minutes = 59 & seconds = 59
Hope this helps.
Answer 7 months ago
Hello, the awesome thing with addressable LEDs is you can make any one any colour you like.
I will go through the code later today and get back to you with a detailed answer.
Question 8 months ago
next question: in the instructable you show the data connections as; seconds to pin 3 - minutes to pin 4 and hours to pin 5.
In the Fritzing schematic that you made, it shows pin 3 to hours and pin 5 to seconds.
may I guess (and or assume) that the instructable is the correct version???
Answer 8 months ago
I went into the IDE and read the sketch program
the instructable is correct - the error is in the Fritzing schematic
Question 8 months ago
OK - I think I get it. 60 and Zero are the same position.
because the clock starts counting from zero - not 1 - yes??
Question 8 months ago
In the second photo - Step 4 - you show the LED's starting at the 12:00 position, which is also the sixty second and sixty minute position. I was wondering if the LED's should start from the "1" position?
10 months ago
I solved the upload issue - libraries were in the wrong place - put things where they needed to be and uploaded to the pro mini - no problemo - I am now working on a 3D print file for the clock face - I'll let you know how it turns out.
Reply 10 months ago
That’s great news.
I think I 3D print will be awesome. Can’t wait to see it
Question 10 months ago on Step 5
The bottom right photo in step 5 shows a blue and a red wire from the end of the last hour LED (11 O'clock position) to the power connections. This is not shown on the schematic you included. The wires to minutes and seconds end at the end of the chain. Can you please explain?
Answer 10 months ago
Hi,
Sorry about that. I was a little concerned that there might be some brownout on the LED at the end of the chains from the voltage drop I thought that creating a continuous loop might help to mitigate it. However when I tested a full chain of 60 I found that it wasn’t a problem and decided not to include it on the other 2 chains.
You can safely ignore the red and blue wires you can see in the photo.
Question 10 months ago
So - I managed to fix the "unterminated comment" issue
now I get "error compiling for board Arduino Pro or Pro Mini"
Reply 10 months ago
Sorry again for the trouble.
I have tried to reproduce this on my end but it seams to be working fine.
Question 10 months ago on Step 4
when I try to load the NeoPixel Ring Clock program onto my pro mini - I get an error message that says:
unterminated comment
do you know how can I fix this?
Reply 10 months ago
Hello, Sorry to hear about your trouble.
It looks like a couple of / got lost. My bad..
The top of the sketch should look like the screen shot.
Note the / after the last * on line 19
and the missing / on line 21
10 months ago
Looking at this the first time, I thought for sure it was drilled with CNC or at least a drill press. The LEDs are spaced and aligned perfectly - nice job doing this with a hand drill!
10 months ago
Best clock in the contest ever. It will be better if you attach the wire diagram to make others understand well. Thank you for sharing! I voted it :-)
Reply 10 months ago
Thanks so much for your kind comments and vote! I have created a wire diagram to hopefully make it easier to follow and I will add it to the instructable soon.
10 months ago
This is so cool! Thanks for sharing! | https://www.instructables.com/id/132-Pixel-Clock/ | CC-MAIN-2019-26 | refinedweb | 2,652 | 81.33 |
okay so enclosed is the program i am working on for class, basically what it does is it will take the provided integers and get the avg of them as well as the difference, smallest and largest number; but my problem is that i need it to be where if the user enters a "0" then the program will go to the end and basically still show the avg of the numbers up till 0, the smallest, largest and difference as well, so if someone could please help/ teach me to do this it would be mostly appreciated. by the way i hope i posted this properly i am just getting used to writing codes.
--------------------------------------------------------------
----------------------------------------------------------------------------Code://Charles Shane Miller Jr. #include <cstdlib> #include <iostream> using namespace std; int main() { int one; // integer uno int two; // integer dos int three; // integer tres int four; // integer cuatro int five; // integer cinco int avg; // average int smallest; int largest; int diff; int zero; cout << "So what is lucky number one?" << endl; cin >> one; cout << "okay and what shall the terrible second integer be?" << endl; cin >> two; cout << "So what is unlucky number 3?" << endl; cin >> three; cout << "okay and que es numero cuatro por favor?" << endl; cin >> four; cout << "and last but not least, what is lucky number five? " << endl; cin >> five; avg = one * two * three * four * five / 5 ; //formula to take integers and get the average cout << "" << endl; cout <<"The average of the numbers is " << avg << endl; // outputs the answer in a user friendly way cout << "" << endl; if ( one < two ) // start of algorithm to find smallest number smallest = one; else smallest = two; if ( smallest < three ) smallest = smallest; else smallest = three; if ( smallest < four ) smallest = smallest; else smallest = four; if ( smallest < five ) smallest = smallest; else smallest = five; cout << "the smallest is " << smallest << endl; // end of algorithm to find smallest number cout << "" << endl; if ( one > two ) // start of algorithm to find largest number largest = one; else largest = two; if ( largest > three ) largest = largest; else largest = three; if ( largest > four ) largest = largest; else largest = four; if ( largest > five ) largest = largest; else largest = five; if ( 0 ) break; else { continue; } cout << "the largest is " << largest << endl; cout << " " << endl; diff = largest - smallest; cout << "The difference of the laregst number and the smallest number is " << diff << endl; system("PAUSE"); while (zero != 0); return 0; } | 
https://cboard.cprogramming.com/cplusplus-programming/112120-help-very-basic-cplusplus-program.html | CC-MAIN-2017-26 | refinedweb | 385 | 57.98 |
#include <CDR_Stream.h>
#include <CDR_Stream.h>
List of all members.
This class is a base class for defining codeset translation routines to handle the character set translations required by both CDR Input streams and CDR Output streams.
[virtual]
[protected]
Exposes the stream implementation of <adjust>, this is useful in many cases to minimize memory allocations during marshaling. On success <buf> will contain a contiguous area in the CDR stream that can hold <size> bytes aligned to <align>. Results
Used by derived classes to set errors in the CDR stream.
Obtain the CDR Stream's major & minor version values.
[pure virtual]
Children have access to low-level routines because they cannot use read_char or something similar (it would recurse).
Efficiently read <length> elements of size <size> each from <input> into <x>; the data must be aligned to <align>.
Efficiently write <length> elements of size <size> from <x> into <output>. Before inserting the elements enough padding is added to ensure that the elements will be aligned to <align> in the stream. | https://www.dre.vanderbilt.edu/Doxygen/5.4.7/html/ace/classACE__WChar__Codeset__Translator.html | CC-MAIN-2022-40 | refinedweb | 169 | 64.71 |
Hello,Deductible student loan interest is defined as:
Interest paid on a loan that was taken out for you, your spouse, or a person who was your dependent when you took out the loan,Interest paid within a reasonable amount of time before or after you took out the loan,and interest paid on a loan that was for eligible education expenses during an academic period for the eligible student.If your daughter was your dependent when you took out the loan and the interest paid meets the other two requirements, you may deduct the student loan interest. For additional information on deductible student loan interest, please follow this link:. Student loan interest information begins on page 29. I hope this helps.
Did not answere my question can I deduct the payment for the loan
Was your daughter your dependent at the time you took out the loan?
Yes, I already deduct interest, can I deduct payments?
Unfortunately no. Payments are not deductible due to the fact that education expenses could have been used as deductions or credits when she was a student. If education credits/deductions were taken previously, this would cause the expenses to be a double tax benefit.
Was not taken at the time
What years did she attend college?
2003-2004
The rules requiring education expenses permit for the expense of the education to be taken while attending college. If the expenses were not used for deductions or credits at the time, they are forfeited. If the expenses had been omitted on returns within the last 3 years, the returns could have been amended to claim the expenses. However, returns may only be amended for 3 years after the due date of the return (as of now, you can only go back to 2008).
Ok, thanks for your help, thats what I needed to know | http://www.justanswer.com/tax/66beu-years-ago-took-education-loan-daughter-12000.html | CC-MAIN-2015-14 | refinedweb | 309 | 59.43 |
Saturday, October 27, 2012
#
Recently I had to do some performance work which included reading a lot of code. It is fascinating with what ideas people come up to solve a problem. Especially when there is no problem. When you look at other peoples code you will not be able to tell if it is well performing or not by reading it. You need to execute it with some sort of tracing or even better under a profiler. The first rule of the performance club is not to think and then to optimize but to measure, think and then optimize. The second rule is to do this do this in a loop to prevent slipping in bad things for too long into your code base. If you skip for some reason the measure step and optimize directly it is like changing the wave function in quantum mechanics. This has no observable effect in our world since it does represent only a probability distribution of all possible values. In quantum mechanics you need to let the wave function collapse to a single value. A collapsed wave function has therefore not many but one distinct value. This is what we physicists call a measurement.
If you optimize your application without measuring it you are just changing the probability distribution of your potential performance values. Which performance your application actually has is still unknown. You only know that it will be within a specific range with a certain probability. As usual there are unlikely values within your distribution like a startup time of 20 minutes which should only happen once in 100 000 years. 100 000 years are a very short time when the first customer tries your heavily distributed networking application to run over a slow WIFI network…
What is the point of this? Every programmer/architect has a mental performance model in his head. A model has always a set of explicit preconditions and a lot more implicit assumptions baked into it. When the model is good it will help you to think of good designs but it can also be the source of problems. In real world systems not all assumptions of your performance model (implicit or explicit) hold true any longer. The only way to connect your performance model and the real world is to measure it. In the WIFI example the model did assume a low latency high bandwidth LAN connection. If this assumption becomes wrong the system did have a drastic change in startup time.
Lets look at a example. Lets assume we want to cache some expensive UI resource like fonts objects. For this undertaking we do create a Cache class with the UI themes we want to support. Since Fonts are expensive objects we do create it on demand the first time the theme is requested.
A simple example of a Theme cache might look like this:
using System;
using System.Collections.Generic;
using System.Drawing;
struct Theme
{
public Color Color;
public Font Font;
}
static class ThemeCache
{
static Dictionary<string, Theme> _Cache = new Dictionary<string, Theme>
{
{"Default", new Theme { Color = Color.AliceBlue }},
{"Theme12", new Theme { Color = Color.Aqua }},
};
public static Theme Get(string theme)
{
Theme cached = _Cache[theme];
if (cached.Font == null)
{
Console.WriteLine("Creating new font");
cached.Font = new Font("Arial", 8);
}
return cached;
}
}
class Program
{
static void Main(string[] args)
{
Theme item = ThemeCache.Get("Theme12");
item = ThemeCache.Get("Theme12");
}
}
This cache does create font objects only once since on first retrieve of the Theme object the font is added to the Theme object. When we let the application run it should print “Creating new font” only once. Right?
Wrong!
The vigilant readers have spotted the issue already. The creator of this cache class wanted to get maximum performance. So he decided that the Theme object should be a value type (struct) to not put too much pressure on the garbage collector.
The code
Theme cached = _Cache[theme];
if (cached.Font == null)
{
Console.WriteLine("Creating new font");
cached.Font = new Font("Arial", 8);
}
does work with a copy of the value stored in the dictionary. This means we do mutate a copy of the Theme object and return it to our caller. But the original Theme object in the dictionary will have always null for the Font field! The solution is to change the declaration of struct Theme to class Theme or to update the theme object in the dictionary. Our cache as it is currently is actually a non caching cache. The funny thing was that I found out with a profiler by looking at which objects where finalized. I found way too many font objects to be finalized. After a bit debugging I found the allocation source for Font objects was this cache. Since this cache was there for years it means that
That was the story of the non caching cache. Next time I will write something something about measuring.
Skin design by Mark Wagner, Adapted by David Vidmar | http://geekswithblogs.net/akraus1/archive/2012/10/27.aspx | CC-MAIN-2015-48 | refinedweb | 828 | 65.42 |
DbgShell - A PowerShell Front-End For The Windows Debugger Engine
A PowerShell front-end for the Windows debugger engine.
Ready to tab your way to glory? For a quicker intro, take a look at Getting Started.
Disclaimers
- This project is not produced, endorsed, or monitored by the Windows debugger team. While the debugger team welcomes feedback about their API and front ends (windbg, kd, et al), they have no connection with this project. Do not file bugs or feedback to the debugger team concerning this project.
- This is not a funded project: it has no official resources allocated to it, and is only worked on by volunteers. Do not take any production dependency on this project unless you are willing to support it completely yourself. Feel free to file Issues and submit Pull Requests, but understand that with the limited volunteer resources, it may be a while before your submissions are handled.
- This is an experimental project: it is not fully baked, and you should expect breaking changes to be made often.
Binaries
Motivation
Have you ever tried automating anything in the debugger? (cdb/ntsd/kd/windbg) How did that go for you?
The main impetus for DbgShell is that it's just waaaay too hard to automate anything in the debugger. There are facilities today to assist in automating the debugger, of course. But in my opinion they are not meeting people's needs.
- Using the built-in scripting language is arcane, limited, difficult to get right, and difficult to get help with.
- DScript is kind of neat, but virtually unknown, and it lacks a REPL, and it's too low-level.
- Writing a full-blown debugger extension DLL is very powerful, but it's a significant investment—way too expensive for solving quick, "one-off" problems as you debug random, real-world problems. Despite the cost, there are a large number of debugger extensions in existence. I think there should not be nearly so many; I think the only reason there are so many is because there aren't viable alternatives.
- Existing attempts at providing a better interface (such as PowerDbg) are based on "scraping" and text parsing, which is hugely limiting (not to mention ideologically annoying) and thus are not able to fulfill the promise of a truly better interface (they are only marginally better, at best).
- Existing attempts to provide an easier way to write a debugger extension are merely a stop-gap addressing the pain of developing a debugger extension; they don't really solve the larger problem. (for instance, two major shortcomings are: they are still too low-level (you have to deal with the dbgeng COM API), and there's no REPL)
- The debugger team has recently introduced JavaScript scripting. JavaScript is a much better (and more well-defined) language than the old windbg scripting language, but I think that PowerShell has some advantages, the largest of which is that nobody really uses a JavaScript shell--PowerShell is much better as a combined shell and scripting language.
The DbgShell project provides a PowerShell front-end for dbgeng.dll, including:
- a managed "object model" (usable from C# if you wished), which is higher-level than the dbgeng COM API,
- a PowerShell "navigation provider", which exposes aspects of a debugging target as a hierarchical namespace (so you can "cd" to a particular thread, type "dir" to see the stack, "cd" into a frame, do another "dir" to see locals/registers/etc.),
- cmdlets for manipulating the target,
- a custom PowerShell host which allows better control of the debugger CLI experience, as well as providing features not available in the standard powershell.exe host (namely, support for text colorization using ANSI escape codes (a la ISO/IEC 6429))
In addition to making automation much easier and more powerful, it will address other concerns as well, such as ease of use for people who don't have to use the debuggers so often. (one complaint I've heard is that "when I end up needing to use windbg, I spend all my time in the .CHM")
For seasoned windbg users, on the other hand, another goal is to make the transition as seamless as possible. So, for instance, the namespace provider is not the only way to access data; you can still use traditional commands like "~3 s", "k", etc.
Screenshots
Notable Features
- Color: support for text colorization using ANSI escape codes (a la ISO/IEC 6429)
- Custom formatting engine: Don't like .ps1xml stuff? Me neither. In addition to standard table, list, and custom views, you can define "single-line" views which are very handy for customizing symbol value displays.
- Custom symbol value conversion: For most variables, the default conversion and display are good. But sometimes, you'd like the debugger to do a little more work for you. The symbol value conversion feature allows, for instance, STL collection objects to be transformed into .NET collection objects that are much easier to deal with.
- Derived type detection: For when your variable is an IFoo, but the actual object is a FooImpl.
- Rich type information: exposed for your programmatic pleasure.
- Q: Does it work in WinDbg? I will only use WinDbg. A: Yes--load up the DbgShellExt.dll extension DLL, and then run "!dbgshell" to pop open a DbgShell console.
Other topics
- Getting Started with DbgShell
- Color
- Custom formatting engine
- Custom symbol value conversion
- Derived type detection
- Rich type information
- Hacking on DbgShell
- DbgEngWrapper
Reviewed by Lydecker Black on 9:03 AM
https://www.kitploit.com/2018/10/dbgshell-powershell-front-end-for.html
QtQuick.ParentAnimation
Animates changes in parent values.
Properties
Detailed Description
ParentAnimation is used to animate a parent change for an Item.
For example, the following ParentChange changes blueRect to become a child of redRect when it is clicked. The inclusion of the ParentAnimation, which defines a NumberAnimation to be applied during the transition, ensures the item animates smoothly as it moves to its new parent:
import QtQuick 2.0

Item {
    width: 200; height: 100

    Rectangle {
        id: redRect
        width: 100; height: 100
        color: "red"
    }

    Rectangle {
        id: blueRect
        x: redRect.width
        width: 50; height: 50
        color: "blue"

        states: State {
            name: "reparented"
            ParentChange { target: blueRect; parent: redRect; x: 10; y: 10 }
        }

        transitions: Transition {
            ParentAnimation {
                NumberAnimation { properties: "x,y"; duration: 1000 }
            }
        }

        MouseArea { anchors.fill: parent; onClicked: blueRect.state = "reparented" }
    }
}
A ParentAnimation can contain any number of animations. These animations will be run in parallel; to run them sequentially, define them within a SequentialAnimation.
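As a sketch of the sequential case (the 500 ms durations are arbitrary, chosen for illustration), wrapping the animations in a SequentialAnimation makes them run one after the other rather than in parallel:

```qml
ParentAnimation {
    // Runs the x animation to completion, then the y animation,
    // instead of animating both properties at the same time.
    SequentialAnimation {
        NumberAnimation { properties: "x"; duration: 500 }
        NumberAnimation { properties: "y"; duration: 500 }
    }
}
```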
In some cases, such as when reparenting between items with clipping enabled, it is useful to animate the parent change via another item that does not have clipping enabled. Such an item can be set using the via property.
target : Item
The item to reparent.
When used in a transition, if no target is specified, all ParentChange occurrences are animated by the ParentAnimation.
via : Item
The item to reparent via. This provides a way to do an unclipped animation when both the old parent and new parent are clipped.
ParentAnimation {
    target: myItem
    via: topLevelItem
    // ...
}
Note: This only works when the ParentAnimation is used in a Transition in conjunction with a ParentChange. | https://phone.docs.ubuntu.com/en/apps/api-qml-development/QtQuick.ParentAnimation | CC-MAIN-2021-04 | refinedweb | 257 | 56.45 |