Kubernetes offers several features that can be used together to create self-healing applications in a variety of scenarios. In this lab, you will be able to practice your skills at using features such as probes and restart policies to create a container application that is automatically healed when it stops working.
Learning Objectives
Successfully complete this lab by achieving the following learning objectives:
- Set a Restart Policy to Restart the Container When It Is Down
Find the beebox-shipping-data pod located in the default namespace. Modify this pod so its restartPolicy will restart the container whenever it fails.
Note: You may need to delete and re-create the pod in order to make this change.
- Create a Liveness Probe to Detect When the Application Has Crashed
Add a liveness probe to the container that checks the container status by making an HTTP request to the container every 5 seconds. The request should check the / (root) path on port 8080.
Note: You may need to delete and re-create the pod in order to make this change.
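Putting both objectives together, the finished pod spec might look roughly like this (the container name and image are placeholders; the lab's actual pod defines its own):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: beebox-shipping-data
  namespace: default
spec:
  restartPolicy: OnFailure        # restart the container whenever it fails
  containers:
  - name: shipping-data           # placeholder name
    image: example/shipping:1.0   # placeholder image
    livenessProbe:
      httpGet:
        path: /                   # probe the root path
        port: 8080
      periodSeconds: 5            # check every 5 seconds
```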
A resource which streams data from memory.
#include <Wt/WMemoryResource>
A resource which streams data from memory.
Use this resource if you want to serve resource data from memory. This is suitable for relatively small resources, which still require some computation.
If creating the data requires computation which you would like to post-pone until the resource is served, then you may want to directly reimplement WResource instead and compute the data on the fly while streaming.
Usage examples:
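A minimal sketch of typical use (the MIME type, the data source, and the surrounding widget are illustrative assumptions, not taken from the Wt documentation):

```cpp
#include <vector>
#include <Wt/WMemoryResource>
#include <Wt/WImage>
#include <Wt/WContainerWidget>

// Serve a small pre-computed buffer (here: raw PNG bytes) from memory.
void addChart(Wt::WContainerWidget *container,
              const std::vector<unsigned char> &pngBytes)
{
    Wt::WMemoryResource *res = new Wt::WMemoryResource("image/png");
    res->setData(pngBytes);                  // the resource copies the data
    new Wt::WImage(res, "chart", container); // the image serves the resource
}
```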
Creates a new resource.
You must call setMimeType() and setData() before using the resource.
Sets new data for the resource to serve.
Sets the data to serve, using the first count bytes from the C-style data array.
Load Visual C++ 2005

Start by loading the Visual C++ IDE. You can also see how to do this with Visual C++ or Turbo C++ Explorer.
From the File menu, click New and select Project. Select Win 32 on the Project Types tree. In the name box, enter example1. You can change the location to wherever you want to store the examples. It defaults to a folder within Documents and Settings. This folder holds the source code and project files. If you want a new directory leave the Create directory for solution checkbox ticked.
Click OK. You will now see the Win 32 Application Wizard. You can press Finish or click Next to see the Application Settings. For example 1 we can keep the default settings.
Now type in a couple of lines so you end up with this. Press Ctrl + s (ie Hold the ctrl key down and press the s key) and this will save the file. You can also click File, Save example1.cpp or click the single floppy disk image just below and between the View and Project menu commands.
// example1.cpp : main project file.
#include "stdafx.h"
#include <iostream>

using namespace std;

int main(void)
{
    cout << "Hello World";
    return 0;
}

Press F7 and this will compile. Before you run it, click the line

    return 0;

and press the F9 key to set a Break Point. You can run it by pressing F5. It will open a console window and output "Hello World". Press F5 and it will continue running and exit. Without the Break Point, it would just open the window, print Hello World, and then close too fast to notice.
We'll use these same settings for all the future lessons in the C++ tutorials so make a note of them.
On the next page : A line by line breakdown of Example 1. | http://cplus.about.com/od/learning1/ss/clessonone.htm | crawl-002 | refinedweb | 304 | 85.28 |
Hi,
I am using sikuli for automating a game. The full code is pretty long but the part that moves / clicks mouse is as follows:-
class click_thread(threading.Thread):
    def __init__(self, reg_to_click):
        threading.Thread.__init__(self)
        self.reg_to_click = reg_to_click
    def run(self):
        self.click_func(self.reg_to_click)
    def click_func(self, reg_to_click):
        while True:
            if auto_mouse == 'go':
                click(reg_to_click)
                break
            else:
                wait(1)

click_lock = threading.Lock()
# auto_mouse is a global variable I use to 'pause' mouse actions if reqd by reassigning it from elsewhere
I am using threading to take care of mouse actions. Other functions in the code create a thread instance whenever they need the mouse to click something on the screen in the following way:-
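The instantiation snippet itself is missing from the post; each caller presumably did something like the following (a stand-in class is included here only so the sketch is self-contained — the real class is the one above, and the region value is illustrative):

```python
import threading

# Minimal stand-in for the click_thread class (illustrative only):
class click_thread(threading.Thread):
    def __init__(self, reg_to_click):
        threading.Thread.__init__(self)
        self.reg_to_click = reg_to_click
        self.clicked = None
    def run(self):
        # the real class calls Sikuli's click() here
        self.clicked = self.reg_to_click

# Each caller fires one thread per click request:
t = click_thread("some_region")   # "some_region" stands in for a Sikuli Region
t.start()
t.join()
print(t.clicked)  # some_region
```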
The program slows down as it runs, resulting in a lot of 'timeouts' in the game i use it on. When i check logs, the most frequent entry seems to be 'MouseDown: extended delay'. I have tried running it both from IDE and the command line. It appears to make no difference to this problem. Any idea what might be causing this and what can be done to rectify it?
My env:-
OS:- Microsoft Windows 10 Home, 64-bit
Java:- Version 8, Update 151, build 1.8.0_151-b12
Sikuli: 1.1.1
Thanks in Advance.
Shekhar
Question information
- Language: English
- Status: Answered
- For: Sikuli
- Assignee: No assignee
- Last query: 2017-12-03
- Last reply: 2017-12-04
--- None of these settings really make sense:
--- click using a thread:
not needed with SikuliX 1.1.1+, since internally mouse actions are globally synchronized
--- extended delay
Internally SikuliX uses the Java AWT Robot class to handle mouse actions (Button up/down and move). All higher level mouse actions are composed based on these Robot features.
The error message comes up, when inside the Robot a feature like ButtonDown does not come back after a maximum delay (standard 1 second). This extended delay is not created by any SikuliX feature, but is completely in the layer between the Robot and the system mouse handling.
Since you are automating a game, it might well be that the game engine is aware that the mouse actions are non-human and hence interferes with the mouse handling by adding some delay, which seems probable looking at the continuous increase of the delay.

You might try to implement your central click function using mouseDown() and mouseUp() with surrounding and intermediate short waits that are randomly generated. This may help to avoid being detected as a non-human.
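A sketch of that suggestion (the names are illustrative; in a Sikuli script you would pass `mouseDown`/`mouseUp` bound to `Button.LEFT` as the two callables):

```python
import random
import time

def human_click(press, release, rng=None):
    """Press and release the mouse with randomized pauses, so the click
    timing looks less machine-like. `press` and `release` are callables,
    e.g. Sikuli's mouseDown/mouseUp wrapped with Button.LEFT (an
    assumption -- adapt to your own script)."""
    rng = rng or random.Random()
    time.sleep(rng.uniform(0.05, 0.25))   # pause before pressing
    press()
    time.sleep(rng.uniform(0.04, 0.18))   # hold for a human-ish duration
    release()

# Example with stand-in callables:
events = []
human_click(lambda: events.append("down"),
            lambda: events.append("up"),
            random.Random(0))
print(events)  # ['down', 'up']
```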
I tried to carry out the clicks via a function instead of using multi threading to see if it makes any difference:-
def click_func(reg_to_click):
    while True:
        if auto_mouse == 'go':
            Settings.MoveMouseDelay = 0.1
            Settings.DelayBeforeMouseDown = 0.1
            Settings.ClickDelay = 0.1
            click(reg_to_click.getCenter())
            mouseMove(0, 50)
            break
        else:
            print(time.asctime(time.localtime(time.time())), "click func recd call - waiting 1s")
            wait(1)
All other functions in the script now call this function to get any clicking done. I am still experiencing the script slowing down gradually. Below are the latest logs:-
[debug] MouseDown: extended delay: 1026
[debug] MouseDown: extended delay: 1176
[debug] MouseDown: extended delay: 1493
[debug] MouseDown: extended delay: 1584
[debug] MouseDown: extended delay: 1658
[debug] MouseDown: extended delay: 3077
[debug] MouseDown: extended delay: 3248
[debug] MouseDown: extended delay: 3394
[debug] MouseDown: extended delay: 5769
[debug] MouseUp: extended delay: 1083
[debug] MouseUp: extended delay: 1094
[debug] MouseUp: extended delay: 1159
[debug] MouseUp: extended delay: 1263
[debug] MouseUp: extended delay: 1394
[debug] MouseUp: extended delay: 2123
Any advise / suggestions would be much appreciated on how to solve this problem.
Thanks | https://answers.launchpad.net/sikuli/+question/661348 | CC-MAIN-2017-51 | refinedweb | 594 | 50.26 |
This article covers an issue that arises from having reference variables on the heap, and how to fix it using ICloneable.
A Copy Is Not A Copy.
To clearly define the problem, let's examine what happens when there is a value type on the heap versus having a reference type on the heap. First we'll look at the value type. Take the following class and struct. We have a Dude class which contains a Name element and two Shoe(s). We have a CopyDude() method to make it easier to make new Dudes.
public struct Shoe{
public string Color;
}
public class Dude
{
    public string Name;
    public Shoe RightShoe;
    public Shoe LeftShoe;

    public Dude CopyDude()
    {
        Dude newPerson = new Dude();
        newPerson.Name = Name;
        newPerson.LeftShoe = LeftShoe;
        newPerson.RightShoe = RightShoe;
        return newPerson;
    }

    public override string ToString()
    {
        return (Name + " : Dude!, I have a " + RightShoe.Color +
            " shoe on my right foot, and a " +
            LeftShoe.Color + " on my left foot.");
    }
}
Our Dude class is a reference type, and because the Shoe struct is a member element of the class, both the class and its embedded struct data end up on the heap.
When we run the following method:
public static void Main()
{
    Class1 pgm = new Class1();
    Dude Bill = new Dude();
    Bill.Name = "Bill";
    Bill.LeftShoe = new Shoe();
    Bill.RightShoe = new Shoe();
    Bill.LeftShoe.Color = Bill.RightShoe.Color = "Blue";
    Dude Ted = Bill.CopyDude();
    Ted.Name = "Ted";
    Ted.LeftShoe.Color = Ted.RightShoe.Color = "Red";
    Console.WriteLine(Bill.ToString());
    Console.WriteLine(Ted.ToString());
}
We get the expected output:
Bill : Dude!, I have a Blue shoe on my right foot, and a Blue on my left foot.
Ted : Dude!, I have a Red shoe on my right foot, and a Red on my left foot.
What happens if we make the Shoe a reference type? Herein lies the problem. If we change the Shoe to a reference type as follows:
public class Shoe
{
    public string Color;
}
and run the exact same code in Main(), look how our output changes:
Bill : Dude!, I have a Red shoe on my right foot, and a Red on my left foot.
Ted : Dude!, I have a Red shoe on my right foot, and a Red on my left foot.
The Red shoe is on the other foot. This is clearly an error. Do you see why it's happening? Here's what we end up with in the heap.
Because we now are using Shoe as a reference type instead of a value type and when the contents of a reference type are copied only the pointer is copied (not the actual object being pointed to), we have to do some extra work to make our Shoe reference type behave more like a value type.
Luckily, we have an interface that will help us out: ICloneable. This interface is basically a contract that all Dudes will agree to and defines how a reference type is duplicated in order to avoid our "shoe sharing" error. All of our classes that need to be "cloned" should use the ICloneable interface, including the Shoe class.
ICloneable consists of one method: Clone()
public object Clone()
{
}
Here's how we'll implement it in the Shoe class:
public class Shoe : ICloneable
{
public string Color;
    #region ICloneable Members
    public object Clone()
    {
Shoe newShoe = new Shoe();
newShoe.Color = Color.Clone() as string;
return newShoe;
}
#endregion
}
Inside the Clone() method, we just make a new Shoe, clone all the reference types, copy all the value types, and return the new object. You probably noticed that the string class already implements ICloneable so we can call Color.Clone(). Because Clone() returns a reference to an object, we have to "retype" the reference before we can set the Color of the shoe.
Next, in our CopyDude() method we need to clone the shoes instead of copying them
newPerson.LeftShoe = LeftShoe.Clone() as Shoe;
newPerson.RightShoe = RightShoe.Clone() as Shoe;
Now, when we run main:
Dude Bill = new Dude();
Console.WriteLine(Bill.ToString());
We get:
Bill : Dude!, I have a Blue shoe on my right foot, and a Blue on my left foot.
Ted : Dude!, I have a Red shoe on my right foot, and a Red on my left foot.
Which is what we want.
Wrapping Things Up.
So as a general practice, we want to always clone reference types and copy value types. (It will reduce the amount of aspirin you will have to purchase to manage the headaches you get debugging these kinds of errors.)
So in the spirit of headache reduction, let's take it one step further and clean up the Dude class to implement ICloneable instead of using the CopyDude() method.
public class Dude : ICloneable
{
    public string Name;
    public Shoe RightShoe;
    public Shoe LeftShoe;

    public override string ToString()
    {
        return (Name + " : Dude!, I have a " + RightShoe.Color +
            " shoe on my right foot, and a " +
            LeftShoe.Color + " on my left foot.");
    }

    #region ICloneable Members
    public object Clone()
    {
        Dude newPerson = new Dude();
        newPerson.Name = Name.Clone() as string;
        newPerson.LeftShoe = LeftShoe.Clone() as Shoe;
        newPerson.RightShoe = RightShoe.Clone() as Shoe;
        return newPerson;
    }
    #endregion
}
And we'll change the Main() method to use Dude.Clone()
Dude Ted = Bill.Clone() as Dude;
Console.WriteLine(Ted.ToString());
And our final output is:

Bill : Dude!, I have a Blue shoe on my right foot, and a Blue on my left foot.
Ted : Dude!, I have a Red shoe on my right foot, and a Red on my left foot.
So all is well.
Something interesting to note is that the System.String class is immutable, so even though the assignment operator (the "=" sign) only copies the reference, you don't have to worry about a shared string being modified through duplicate references. However, you do have to watch out for memory bloating. If you look back at the diagrams, because the string is a reference type it really should be a pointer to another object in the heap, but for simplicity's sake, it's shown as a value type.
In Conclusion.
As a general practice, if we plan on ever copying of our objects, we should implement (and use) ICloneable. This enables our reference types to somewhat mimic the behavior of a value type. As you can see, it is very important to keep track of what type of variable we are dealing with because of differences in how the memory is allocated for value types and reference types.
In the next article, we'll look at a way to reduce our code "footprint" in memory.
Until then,
Happy coding.
Software Architect, Team Lead, Engineer. Twitter: @Matt_Cochran
©2015
C# Corner. All contents are copyright of their authors. | http://www.c-sharpcorner.com/UploadFile/rmcochran/chsarp_memory401152006094206AM/chsarp_memory4.aspx | CC-MAIN-2015-32 | refinedweb | 1,022 | 74.9 |
NAME
BIO_ADDR, BIO_ADDR_new, BIO_ADDR_clear, BIO_ADDR_free, BIO_ADDR_rawmake, BIO_ADDR_family, BIO_ADDR_rawaddress, BIO_ADDR_rawport, BIO_ADDR_hostname_string, BIO_ADDR_service_string, BIO_ADDR_path_string - BIO_ADDR routines
SYNOPSIS
#include <sys/types.h>
#include <openssl/bio.h>

typedef union bio_addr_st BIO_ADDR;

BIO_ADDR *BIO_ADDR_new(void);
void BIO_ADDR_free(BIO_ADDR *);
void BIO_ADDR_clear(BIO_ADDR *ap);
int BIO_ADDR_rawmake(BIO_ADDR *ap, int family,
                     const void *where, size_t wherelen,
                     unsigned short port);
int BIO_ADDR_family(const BIO_ADDR *ap);
int BIO_ADDR_rawaddress(const BIO_ADDR *ap, void *p, size_t *l);
unsigned short BIO_ADDR_rawport(const BIO_ADDR *ap);
char *BIO_ADDR_hostname_string(const BIO_ADDR *ap, int numeric);
char *BIO_ADDR_service_string(const BIO_ADDR *ap, int numeric);
char *BIO_ADDR_path_string(const BIO_ADDR *ap);
DESCRIPTION
The BIO_ADDR type is a wrapper around all types of socket addresses that OpenSSL deals with, currently transparently supporting AF_INET, AF_INET6 and AF_UNIX according to what's available on the platform at hand.
BIO_ADDR_new() creates a new unfilled BIO_ADDR, to be used with routines that will fill it with information, such as BIO_accept_ex().
BIO_ADDR_free() frees a BIO_ADDR created with BIO_ADDR_new().
BIO_ADDR_clear() clears any data held within the provided BIO_ADDR and sets it back to an uninitialised state.
BIO_ADDR_rawmake() takes a protocol family, a byte array of size wherelen with an address in network byte order pointed at by where, and a port number in network byte order in port (except for the AF_UNIX protocol family, where port is meaningless and therefore ignored), and populates the given BIO_ADDR with them. In case this creates a AF_UNIX BIO_ADDR, wherelen is expected to be the length of the path string (not including the terminating NUL, such as the result of a call to strlen()). Read on about the addresses in "RAW ADDRESSES" below.
BIO_ADDR_family() returns the protocol family of the given BIO_ADDR. The possible non-error results are one of the constants AF_INET, AF_INET6 and AF_UNIX. It will also return AF_UNSPEC if the BIO_ADDR has not been initialised.
BIO_ADDR_rawaddress() will write the raw address of the given BIO_ADDR in the area pointed at by p if p is non-NULL, and will set *l to be the amount of bytes the raw address takes up if l is non-NULL. A technique to only find out the size of the address is a call with p set to NULL. The raw address will be in network byte order, most significant byte first. In case this is a AF_UNIX BIO_ADDR, l gets the length of the path string (not including the terminating NUL, such as the result of a call to strlen()). Read on about the addresses in "RAW ADDRESSES" below.
BIO_ADDR_rawport() returns the raw port of the given BIO_ADDR. The raw port will be in network byte order.
BIO_ADDR_hostname_string() returns a character string with the hostname of the given BIO_ADDR. If numeric is 1, the string will contain the numerical form of the address. This only works for BIO_ADDR of the protocol families AF_INET and AF_INET6. The returned string has been allocated on the heap and must be freed with OPENSSL_free().
BIO_ADDR_service_string() returns a character string with the service name of the port of the given BIO_ADDR. If numeric is 1, the string will contain the port number. This only works for BIO_ADDR of the protocol families AF_INET and AF_INET6. The returned string has been allocated on the heap and must be freed with OPENSSL_free().
BIO_ADDR_path_string() returns a character string with the path of the given BIO_ADDR. This only works for BIO_ADDR of the protocol family AF_UNIX. The returned string has been allocated on the heap and must be freed with OPENSSL_free().
RAW ADDRESSES
Both BIO_ADDR_rawmake() and BIO_ADDR_rawaddress() take a pointer to a network byte order address of a specific size. Internally, those are treated as a pointer to struct in_addr (for AF_INET), struct in6_addr (for AF_INET6) or char * (for AF_UNIX), all depending on the protocol family the address is for.
RETURN VALUES
The string producing functions BIO_ADDR_hostname_string(), BIO_ADDR_service_string() and BIO_ADDR_path_string() will return NULL on error and leave an error indication on the OpenSSL error stack.
All other functions described here return 0 or NULL when the information they should return isn't available.
SEE ALSO
BIO_connect(3), BIO_s_connect(3)
Licensed under the OpenSSL license (the "License"). You may not use this file except in compliance with the License. You can obtain a copy in the file LICENSE in the source distribution or at. | https://www.openssl.org/docs/manmaster/man3/BIO_ADDR_new.html | CC-MAIN-2018-13 | refinedweb | 702 | 52.8 |
AnyEvent::Handle::ZeroMQ - Integrate AnyEvent and ZeroMQ with AnyEvent::Handle like ways.
Version 0.09
    use AnyEvent::Handle::ZeroMQ;
    use AE;
    use ZeroMQ;

    my $ctx = ZeroMQ::Context->new;
    my $socket = $ctx->socket(ZMQ_XREP);
    $socket->bind('tcp://0:8888');

    my $hdl = AnyEvent::Handle::ZeroMQ->new(
        socket => $socket,
        on_drain => sub { print "the write queue is empty\n" },
        on_error => sub { my($error_msg) = @_; ... },
        # catch errors when occured in the reading callback
    );
    # or $hdl->on_drain( sub { ... } );
    # or $hdl->on_error( sub { ... } );

    $hdl->push_read( sub {
        my($hdl, $data) = @_;
        my @out;
        while( defined( my $msg = shift @$data ) ) {
            push @out, $msg;
            last if $msg->size == 0;
        }
        while( my $msg = shift @$data ) {
            print "get: ", $msg->data, $/;
        }
        push @out, "get!";
        $hdl->push_write(\@out);
    } );

    AE::cv->recv;
There is also a module called AnyEvent::ZeroMQ in CPAN.
AnyEvent::ZeroMQ::* is a huge, heavy, full-featured framework, while this module is a simple, lightweight library with fewer dependencies that runs faster.

So this module only occupies a smaller namespace, under AnyEvent::Handle::.

This module and AnyEvent::ZeroMQ::* are not replaceable with each other.
Cindy Wang (CindyLinz)
Please report any bugs or feature requests through the CPAN request tracker for AnyEvent::Handle::ZeroMQ.

You can also look for information on the module's CPAN page.
This program is free software; you can redistribute it and/or modify it under the terms of either: the GNU General Public License as published by the Free Software Foundation; or the Artistic License.
See for more information. | http://search.cpan.org/~cindy/AnyEvent-Handle-ZeroMQ-0.09/lib/AnyEvent/Handle/ZeroMQ.pm | CC-MAIN-2017-09 | refinedweb | 236 | 51.58 |
Derive handler for `fintype` instances #

This file introduces a derive handler to automatically generate `fintype` instances for structures and inductives.
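Assuming mathlib's standard `derive` attribute syntax, a typical invocation looks like this (the inductive below is a made-up example):

```lean
@[derive fintype]
inductive weekend
| saturday : weekend
| sunday : weekend
```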
Implementation notes #
To construct a fintype instance, we need 3 things:
- A list `l` of elements
- A proof that `l` has no duplicates
- A proof that every element in the type is in `l`
Now fintype is defined as a finset which enumerates all elements, so steps (1) and (2) are bundled together. It is possible to use finset operations that remove duplicates to avoid the need to prove (2), but this adds unnecessary functions to the constructed term, which makes it more expensive to compute the list, and it also adds a dependence on decidable equality for the type, which we want to avoid.
Because we will rely on fintype instances for constructor arguments, we can't actually build a list directly, so (1) and (2) are necessarily somewhat intertwined. The inductive types we will be proving instances for look something like this:
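A plausible shape for the example type, inferred from `foo.enum` and the enumeration expression below (an assumption, not the original definition):

```lean
inductive foo
| zero : foo
| one : bool → foo
| two : Π x : fin 3, bar x → foo
```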
The list of elements that we generate is
{foo.zero} ∪ (finset.univ : bool).map (λ b, foo.one b) ∪ (finset.univ : Σ' x : fin 3, bar x).map (λ ⟨x, y⟩, foo.two x y)
except that instead of `∪`, that is `finset.union`, we use `finset.disj_union`, which doesn't require any deduplication, but does require a proof that the two parts of the union are disjoint. We use `finset.cons` to append singletons like `foo.zero`.
The proofs of disjointness would be somewhat expensive since there are quadratically many of them, so instead we use a "discriminant" function. Essentially, we define
def foo.enum : foo → ℕ
| foo.zero := 0
| (foo.one _) := 1
| (foo.two _ _) := 2
and now the existence of this function implies that foo.zero is not foo.two and so on because they
map to different natural numbers. We can prove that sets of natural numbers are mutually disjoint
more easily because they have a linear order: `0 < 1 < 2`, so `0 ≠ 2`.
To package this argument up, we define `finset_above foo foo.enum n` to be a finset `s` together with a proof that all elements `a ∈ s` have `n ≤ enum a`. Now we only have to prove that `enum foo.zero = 0`, `enum (foo.one _) = 1`, etc. (linearly many proofs, all `rfl`) in order to prove that all variants are mutually distinct.
We mirror the `finset.cons` and `finset.disj_union` functions into `finset_above.cons` and `finset_above.union`, and this forms the main part of the finset construction.
This only handles distinguishing variants of a finset. Now we must enumerate the elements of a variant, for example `{foo.one ff, foo.one tt}`, while at the same time proving that all these elements have discriminant `1` in this case. To do that, we use the `finset_in` type, which is a finset satisfying a property `P`, here `λ a, foo.enum a = 1`.
We could use `finset.bind` many times to construct the finset, but it turns out to be somewhat complicated to get good side goals for a naturally nodup version of `finset.bind` in the same way as we did with `finset.cons` and `finset.union`. Instead, we tuple up all arguments into one type, leveraging the `fintype` instance on `psigma`, and then define a map from this type to the inductive type that untuples them and applies the constructor. The injectivity property of the constructor ensures that this function is injective, so we can use `finset.map` to apply it. This is the content of the constructor `finset_in.mk`.
That completes the proofs of (1) and (2). To prove (3), we perform one case analysis over the inductive type, proving theorems like
foo.one a ∈ {foo.zero} ∪ (finset.univ : bool).map (λ b, foo.one b) ∪ (finset.univ : Σ' x : fin 3, bar x).map (λ ⟨x, y⟩, foo.two x y)
by seeking to the relevant disjunct and then supplying the constructor arguments. This part of the
proof is quadratic, but quite simple. (We could do it in `O(n log n)` if we used a balanced tree for the unions.)
The tactics perform the following parts of this proof scheme:
- `mk_sigma` constructs the type `Γ` in `finset_in.mk`
- `mk_sigma_elim` constructs the function `f` in `finset_in.mk`
- `mk_sigma_elim_inj` proves that `f` is injective
- `mk_sigma_elim_eq` proves that `∀ a, enum (f a) = k`
- `mk_finset` constructs the finset `S = {foo.zero} ∪ ...` by recursion on the variants
- `mk_finset_total` constructs the proofs `|- foo.zero ∈ S`, `|- foo.one a ∈ S`, `|- foo.two a b ∈ S` by recursion on the subgoals coming out of the initial `cases`
- `mk_fintype_instance` puts it all together to produce a proof of `fintype foo`. The construction of `foo.enum` is also done in this function.
A step in the construction of `finset.univ` for a finite inductive type.

We will set `enum` to the discriminant of the inductive type, so a `finset_above` represents a finset that enumerates all elements in a tail of the constructor list.
Equations
- derive_fintype.finset_above α enum n = {s // ∀ (x : α), x ∈ s → n ≤ enum x}
Construct a fintype instance from a completed `finset_above`.
This is the case for a simple variant (no arguments) in an inductive type.
Equations
- derive_fintype.finset_above.cons n a h s = ⟨finset.cons a s.val _, _⟩
The base case is when we run out of variants; we just put an empty finset at the end.
Equations
This is a finset covering a nontrivial variant (with one or more constructor arguments).
The property `P` here is `λ a, enum a = n`, where `n` is the discriminant for the current variant.
Equations
- derive_fintype.finset_in P = {s // ∀ (x : α), x ∈ s → P x}
To construct the finset, we use an injective map from the type `Γ`, which will be the sigma over all constructor arguments. We use sigma instances and existing fintype instances to prove that `Γ` is a fintype, and construct the function `f` that maps `⟨a, b, c, ...⟩` to `C_n a b c ...`, where `C_n` is the nth constructor, and `mem` asserts `enum (C_n a b c ...) = n`.
Equations
- derive_fintype.finset_in.mk Γ f inj mem = ⟨finset.map {to_fun := f, inj' := inj} finset.univ, _⟩
For nontrivial variants, we split the constructor list into a `finset_in` component for the current constructor and a `finset_above` for the rest.
Equations
- derive_fintype.finset_above.union n s t = ⟨s.val.disj_union t.val _, _⟩ | https://leanprover-community.github.io/mathlib_docs/tactic/derive_fintype.html | CC-MAIN-2022-05 | refinedweb | 1,051 | 68.36 |
In the general sense of the word, a schema is a generic representation of a class of things. For example, a schema for restaurant menus could be the phrase "a list of dishes available at a particular eating establishment." A schema may resemble the thing it describes, the way a "smiley face" represents an actual human face. The information contained in a schema allows you to identify when something is or is not a representative instance of the concept.
In the XML context, a schema is a pass-or-fail test for documents.[1] A document that passes the test is said to conform to it, or be valid. Testing a document with a schema is called validation. A schema ensures that a document fulfills a minimum set of requirements, finding flaws that could result in anomalous processing. It also may serve as a way to formalize an application, being a publishable object that describes a language in unambiguous rules.
[1] Technically, schemas validate on an element-by-element and attribute-by-attribute basis. It is possible to test a subtree alone for validity and determine that parts are valid while others are not. This process is rather complex and beyond the scope of this book.
An XML schema is like a program that tells a processor how to read a document. It's very similar to a later topic we'll discuss called transformations. The processor reads the rules and declarations in the schema and uses this information to build a specific type of parser, called a validating parser. The validating parser takes an XML instance as input and produces a validation report as output. At a minimum, this report is a return code, true if the document is valid, false otherwise. Optionally, the parser can create a Post Schema Validation Infoset (PSVI) including information about data types and structure that may be used for further processing.
Validation happens on at least four levels:
- Structure: the use and placement of markup elements and attributes.
- Data types: patterns of character data (e.g., numbers, dates, text).
- Integrity: the status of links between nodes and resources.
- Business rules: miscellaneous tests such as spelling checks, checksum results, and so on.
Structural validation is the most important, and schemas are best prepared to handle this level. Data typing is often useful, especially in "data-style" documents, but not widely supported. Testing integrity is less common and somewhat problematic to define. Business rules are often checked by applications.
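To make the structural level concrete, here is a small illustrative DTD with a conforming document (the element names are invented for this example). A validating parser accepts this document, and would reject it if, say, a dish omitted its price:

```xml
<?xml version="1.0"?>
<!DOCTYPE menu [
  <!-- A menu is one or more dishes; each dish has a name and a price. -->
  <!ELEMENT menu (dish+)>
  <!ELEMENT dish (name, price)>
  <!ELEMENT name (#PCDATA)>
  <!ELEMENT price (#PCDATA)>
]>
<menu>
  <dish><name>Pad Thai</name><price>8.50</price></dish>
</menu>
```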
There are many different kinds of XML schemas, each with its own strengths and weaknesses.
The oldest and most widely supported schema language is the Document Type Definition (DTD). Borrowed from SGML, a simplified DTD was included in the XML Core recommendation. Though a DTD isn't necessary to read and process an XML document, it can be a useful component for a document, providing the means to define macro-like entities and other conveniences. DTDs were the first widely used method to formally define languages like HTML.
As soon as XML hit the streets, developers began to clamor for an alternative to DTDs. DTDs don't support namespaces, which appeared after the XML 1.0 specification. They also have very weak data typing, being mostly markup-focused. The W3C formed a working group for XML Schema and began to receive proposals for what would later become their W3C XML Schema recommendation.
Following are some of the proposals made by various groups.
Submitted by Arbortext, DataChannel, Inso Corporation, Microsoft, and the University of Edinburgh in January 1998, this technical note put forth many of the features incorporated in W3C Schema, and many others that were left out, such as a mechanism for declaring entities and object-oriented programming support. Microsoft implemented a version of this called XML-Data Reduced (XDR).
IBM, Microsoft, and Textuality submitted this proposal in July 1998 as an attempt to integrate XML-Data with the Resource Description Framework (RDF). It introduced the idea of making elements and attributes interchangeable.
As the name implies, this technical note was influenced by programming needs, incorporating concepts as interfaces and parameters. It was submitted in July 1998 by Veo Systems/Commerce One. They have created an implementation that they use today.
This proposal came out of discussions on the XML-Dev mailing list. It took the information expressed in a DTD and formatted it as XML, leaving support for data types to other specifications.
Informed by these proposals, the W3C XML Schema Working Group arrived at a recommendation in May 2001, composed of three parts (XMLS0, XMLS1, and XMLS2) named Primer, Structures, and Datatypes, respectively. Although some of the predecessors are still in use, all involved parties agreed that they should be retired in favor of the one, true W3C XML Schema.
An independent effort by a creative few coalesced into another schema language called RELAX NG (pronounced "relaxing"). It is the merging of Regular Language Description for XML (RELAX) and Tree Regular Expressions for XML (TREX). Like W3C Schema, it supports namespaces and datatypes. It also includes some unique innovations, such as interchangeability of elements and attributes in content descriptions and more flexible content models.
RELAX, a product of the Japanese Standard Association's INSTAC XML Working Group, led by Murata Makoto, was designed to be an easy alternative to XML Schema. "Tired of complex specifications?" the home page asks. "You can relax!" Unlike W3C Schema, with its broad scope and high learning curve, RELAX is simple to implement and use.
You can think of RELAX as DTDs (formatted in XML) plus datatypes inherited from W3C Schema's datatype set. As a result, it is nearly painless to migrate from DTDs to RELAX and, if you want to do so later, fairly easy to migrate from RELAX to W3C Schemas. It supported two levels of conformance. "Classic" is just like DTD validation plus datatype checking. "Fully relaxed" added more features.
The theoretical basis of RELAX is Hedge Automata tree processing. While you don't need to know anything about Hedge Automata to use RELAX or RELAX NG, these mathematical foundations make it easier to write efficient code implementing RELAX NG. Murata Makoto has demonstrated a RELAX NG implementation which occupies 27K on a cell phone, including both the schema and the XML parser.
At about the same time RELAX was taking shape, James Clark of Thai Opensource Software was developing TREX. It came out of work on XDuce, a typed programming language for manipulating XML markup and data. XDuce (a contraction of "XML" and "transduce") is a transformation language which takes an XML document as input, extracts data, and outputs another document in XML or another format. TREX uses XDuce's type system and adds various features into an XML-based language. XDuce appeared in March 2000, followed by TREX in January 2001.
Like RELAX, TREX uses a very clear and flexible language that is easy to learn, read, and implement. Definitions of elements and attributes are interchangeable, greatly simplifying the syntax. It has full support for namespaces, mixed content, and unordered content, things that are missing from, or very difficult to achieve, with DTDs. Like RELAX, it uses the W3C XML Schema datatype set, reducing the learning curve further.
RELAX NG (new generation) combines the best features from both RELAX and TREX in one XML-based schema language. First announced in May 2001, an OASIS Technical Committee headed by James Clark and Murata Makoto oversees its development. It was approved as a Draft International Standard by the ISO/IEC.
Also worth noting is Schematron, first proposed by Rick Jelliffe of the Academia Sinicia Computing Centre in 1999. It uses XPath expressions to define validation rules and is one of the most flexible schema languages around.
It may seem like schemas are a lot of work, and you'd be right to think so. In designing a schema, you are forced to think hard about how your language is structured. As your language evolves, you have to update your schema, which is like maintaining a piece of software. There will be bugs, version tracking, usability issues, and even the occasional overhaul to consider. So with all this overhead, is it really worth it?
First, let's look at the benefits:
A schema can function as a publishable specification. There is simply no better way to describe a language than with a schema. A schema is, after all, a "yes or no" test for document conformance. It's designed to be readable by humans and machines alike. DTDs are very reminiscent of Backus-Naur Form (BNF) grammars which are used to describe programming languages. Other schemas, such as RELAX NG, are intuitive and very easy to read. So if you need to disseminate information on how to use a markup language, a schema is not a bad way to do it.
A schema will catch higher-level mistakes. Sure, there are well-formedness rules to protect your software from errors in basic syntax, but do they go far enough? What if a required field of information is missing? Or someone has consistently misspelled an element name? Or a date was entered in the wrong format? These are things only a validating parser can detect.
A schema is portable and efficient. Writing a program to test a document is an option, but it may not be the best one. Software can be platform-dependent, difficult to install, and bulky to transfer. A schema, however, is compact and optimized for one purpose: validation. It's easy to hand someone a schema, and you know it has to work for them because its syntax is governed by a standard specification. And since many schemas are based on XML, they can be edited in XML editors and tested by well-formedness checkers.
A schema is extensible. Schemas are designed to support modularity. If you want to maintain a set of similar languages, or versions of them, they can share common components. For example, DTDs allow you to declare general entities for special characters or frequently used text. They may be so useful that you want to export them to other languages.
Using a schema also has some drawbacks:
A schema reduces flexibility. The expressiveness of schemas varies considerably, and each standard tends to have its flaws. For example, DTDs are notorious for their incompatibility with namespaces. They are also inefficient at specifying a content model that contains required children that may appear in any order. While other schema languages improve upon DTDs, they will always have limitations of one sort or another.
Schemas can be obstacles for authors. In spite of advances in XML editors with fancy graphical interfaces, authoring in XML will never be as easy as writing in a traditional word processor. Time spent thinking about which element to use in a given context is time not spent on thinking about the document's content, which is the original reason for writing it. Some editors supply a menu of elements to select from that changes depending on the context. Depending on the language and the tools, it still can be confusing and frustrating for the lay person.
You have to maintain it. With a schema, you have one more tool to debug and update. Like software, it will have bugs, versions, and even its own documentation. It's all too easy to damage a schema by deleting an imported component or introducing a syntax error. Older documents may not validate if you update the schema, forcing you to make retroactive changes to them. One silver lining is that, except for DTDs, most schema languages are based on XML, which allows you to use XML editors to make changes.
Designing it will be hard. Schemas are tricky documents to compose. You have to really think about how each element will fit together, what kinds of data will be input, whether there are special cases to accommodate. If you're just starting out with a language, there are many needs you don't know about until you start using it, creating a bit of a bootstrapping problem.
To make the decision easier, think about it this way. A schema is basically a quality-control tool. If you are reasonably certain that your documents are good enough for processing, then you have no need for schemas. However, if you want extra assurance that your documents are complete and structurally sound, and the work you save fixing mistakes outweighs the work you will spend maintaining a schema, then you should look into it.
One thing to consider is whether a human will be involved with producing a document. No matter how careful we are, we humans tend to make a lot of mistakes. Validation can find those problems and save frustration later. But software-created documents tend to be very predictable and probably never need to be validated.
The really hard question to answer is not whether you need a schema, but which standard to use. There are a few very valuable choices that I will be describing in the rest of the chapter. I hope to provide you with enough information to decide which one is right for your application. | http://etutorials.org/Programming/Learning+xml/Chapter+4.+Quality+Control+with+Schemas/4.1+Basic+Concepts/ | crawl-001 | refinedweb | 2,193 | 64.2 |
Hello,
currently we are trying to export the content from one repository to
another (in this case for migrating to the BundlePersistanceManager).
The problem is, that our repository contains special characters like
#x0d, #x0a, #x04. As of JCR-674 [1] the export seems to work but the
import does not decode the base64 encoded values [2]. Our initial
thought was, to just remove some of these chars. But it doesn't seem to
be possible to create a search query to find these chars. It would just
be good for us to know if this is correct, or if we missed something. We
are depending on preserving the uuids across the repositories so the
only way seems to be JCR export/import.
At the moment we are in a situation where we cannot migrate to the other
PM and cannot use the JCR export/import in general with Jackrabbit. Does
somebody know an other way of fixing such content in the repository or
exporting/importing it? Any hint is much appreciated.
Best regards,
Sandro
[1] - String properties with invalid XML characters export as invalid
XML
[2] - JCR-1228 Support xs:base64Binary values in system view import | http://mail-archives.apache.org/mod_mbox/jackrabbit-users/200806.mbox/%3C485003E4.8030208@gmx.de%3E | CC-MAIN-2014-10 | refinedweb | 196 | 62.48 |
Introduction
I have closely monitored the series of Data Hackathons and found an interesting trend (shown below). This trend is based on participant rankings on public and private leader board.
I noticed, that participants who rank higher on public leaderboard, lose their position after their ranks gets validated at private leaderboard. Some even failed to secure rank in top 20s on private leaderboard (image below).
Eventually, I discovered the phenomenon which brings such ripples on the leaderboard.
Take a guess! What could be the possible reason for high variation in ranks? In other words, why does their model lose stability when evaluated on private leaderboard? Let’s look some possible reason.
Why do models lose stability?
Let’s understand this using the snapshot illustrating fit of various models below:
Here, we are trying to find the relationship between size and price. For which, we’ve taken the following steps:
- We’ve established the relationship using a linear equation for which the plots have been shown. First plot has high error from training data points. Therefore, this will not perform well at both public and private leader board. This is an example of “Under fitting”. In this case, our model fails to capture the underlying trend of the data.
- In second plot, we just found the right relationship between price and size i.e. low training error and generalization of relationship
- In third plot, we found a relationship which has almost zero training error. This is because, the relationship is developed by considering each deviation in the data point (including noise) i.e. model is too sensitive and captures random patterns which are present only in the current data set. This is an example of “Over fitting”. In this relationship, there could be high deviation in public and private leader board. of this question, we use cross validation technique. This method helps us to achieve more generalized relationships.
Note: This article is meant for every aspiring data scientist keen to improve his/her performance in data science competitions. In the end, I’ve shared python and R codes for cross validation. In R, I’ve used iris data set for demonstration purpose.
What is Cross Validation?
Cross Validation is a technique which involves reserving a particular sample of a data set on which you do not train the model. Later, you test the model on this sample before finalizing the model.
Here are the steps involved in cross validation:
- You reserve a sample data set.
- Train the model using the remaining part of the data set.
- Use the reserve sample of the data set test (validation) set. This will help you to know the effectiveness of model performance. It your model delivers a positive result on validation data, go ahead with current model. It rocks!
What are common methods used for Cross Validation ?
There are various methods of cross validation. I’ve discussed few of them below:
1. The Validation set Approach
In this approach, we reserve 50% of dataset for validation and rest 50% for model training. After testing the model performance on validation data set. However, a major disadvantage of this approach is that we train a model on 50% of the data set only, whereas, it may be possible that we are leaving some interesting information about data i.e. higher bias.
2. Leave one out cross validation (LOOCV)
In this approach, we reserve only one data-point of the available data set. And, train model on the rest of data set. This process iterates for each data point. It is also known for its advantages and disadvantages. Let’s look at them:
- We make use of all data points, hence low bias.
- We repeat the cross validation process iterates n times(where n is number of data points) which results in higher execution time
- This approach leads to higher variation in testing model effectiveness because we test against one data point. So, our estimation gets highly influenced by the data point. If the data point turns out to be an outlier, it can lead to higher variation.
3. k-fold cross validation
From the above two validation methods, we’ve learnt:
- We should train model on large portion of data set. Else, we’d fail every time to read the underlying trend of data sets. Eventually, resulting in higher bias.
- We also need a good ratio testing data points. As, we have seen that lower data points can lead to variance error while testing the effectiveness of model.
- We should iterate on training and testing process multiple times. We should change the train and test data set distribution. This helps to validate the model effectiveness well.
Do we have a method which takes care of all these 3 requirements ?
Yes! That method is known as “k- fold cross validation”. It’s easy to follow and implement. Here are the quick steps:
- Randomly split your entire dataset into k”folds”.
- For each k folds in your dataset, build your model on k – 1 folds of the data set. how does a k-fold validation work for k=10.
Now, one of most commonly asked question is, “How to choose right value of k?”
Always remember, lower value of K is more biased and hence undesirable. On the other hand,.
How to measure the model’s bias-variance?
After k-fold cross validation, we’ll get k different model estimation errors (e1, e2 …..ek). In ideal scenario, these error values should add to zero. To return the model’s bias, we take the average of all the errors. Lower the average value, better the model.
Similarly for calculating model’ variance, we take standard deviation of all errors. Lower value of standard deviation suggests our model does not vary a lot with different subset of training data.
We should focus on achieving a balance between bias and variance. This can be done by reducing the variance and controlling bias to an extent. It’ll result in better predictive model. This trade-off usually leads to building less complex predictive models.
Python Code
from sklearn import cross_validation model = RandomForestClassifier(n_estimators=100)
#Simple K-Fold cross validation. 10 folds. cv = cross_validation.KFold(len(train), n_folds=10, indices=False)
results = [] # "Error_function" can be replaced by the error function of your analysis for traincv, testcv in cv: probas = model.fit(train[traincv], target[traincv]).predict_proba(train[testcv]) results.append( Error_function )
print "Results: " + str( np.array(results).mean() )
R Code
setwd('C:/Users/manish/desktop/RData')
library(plyr) library(dplyr) library(randomForest)
data <- iris
glimpse(data)
#cross validation, using rf to predict sepal.length k = 5 data$id <- sample(1:k, nrow(data), replace = TRUE) list <- 1:k
# prediction and test set data frames that we add to with each iteration over # the folds prediction <- data.frame() testsetCopy <- data.frame()
#Creating a progress bar to know the status of CV progress.bar <- create_progress_bar("text") progress.bar$init(k)
#function for k fold for(i in 1:k){ # remove rows with id i from dataframe to create training set # select rows with id i to create test set trainingset <- subset(data, id %in% list[-i]) testset <- subset(data, id %in% c(i)) #run a random forest model mymodel <- randomForest(trainingset$Sepal.Length ~ ., data = trainingset, ntree = 100) #remove response column 1, Sepal.Length temp <- as.data.frame(predict(mymodel, testset[,-1])) # append this iteration's predictions to the end of the prediction data frame prediction <- rbind(prediction, temp) # append this iteration's test set to the test set copy data frame # keep only the Sepal Length Column testsetCopy <- rbind(testsetCopy, as.data.frame(testset[,1])) progress.bar$step() }
# add predictions and actual Sepal Length values result <- cbind(prediction, testsetCopy[, 1]) names(result) <- c("Predicted", "Actual") result$Difference <- abs(result$Actual - result$Predicted)
# As an example use Mean Absolute Error as Evalution summary(result$Difference)
End Notes
In this article, we discussed about over fitting and methods like cross-validation to avoid over-fitting. We also looked at different cross-validation methods like Validation Set approach, LOOCV and k-fold cross validation, followed by its code in Python and R.
Did you find this article helpful? Please share your opinions / thoughts in the comments section below. Here is your chance to apply this in our upcoming hackthon – The Black Friday Data Hack
14 Comments | https://www.analyticsvidhya.com/blog/2015/11/improve-model-performance-cross-validation-in-python-r/ | CC-MAIN-2017-26 | refinedweb | 1,386 | 57.67 |
This section covers several query-processing subjects that didn't fit very well into earlier sections of this chapter:
How to use result set data to calculate a result after using result set metadata to help verify that the data are suitable for your calculations
How to deal with data values that are troublesome to insert into queries
How to work with binary data
How to get information about the structure of your tables
Common MySQL programming mistakes and how to avoid them
So far we've concentrated on using result set metadata primarily for printing query rows, but clearly there will be times when you need to do something with a result set besides print it. For example, you can compute statistical information based on the data values, using the metadata to make sure the data conform to requirements you want them to satisfy. What type of requirements? For starters, you'd probably want to verify that a column on which you're planning to perform numeric computations actually contains numbers.
The following listing shows a simple function, summary_stats(), that takes a result set and a column index and produces summary statistics for the values in the column. The function also reports the number of missing values, which it detects by checking for NULL values. These calculations involve two requirements that the data must satisfy, so summary_stats() verifies them using the result set metadata:
The specified column must exist; that is, the column index must be within range of the number of columns in the result set. This range is from 0 to mysql_num_fields() - 1.
The column must contain numeric values.
If these conditions do not hold, summary_stats() simply prints an error message and returns. It is implemented as follows:
void summary_stats (MYSQL_RES *res_set, unsigned int col_num)
{
    MYSQL_FIELD   *field;
    MYSQL_ROW     row;
    unsigned long n, missing;
    double        val, sum, sum_squares, var;

    /* verify data requirements: column must be in range and numeric */
    /* (col_num is unsigned, so only the upper bound needs checking) */
    if (col_num >= mysql_num_fields (res_set))
    {
        print_error (NULL, "illegal column number");
        return;
    }
    mysql_field_seek (res_set, col_num);
    field = mysql_fetch_field (res_set);
    if (!IS_NUM (field->type))
    {
        print_error (NULL, "column is not numeric");
        return;
    }

    /* calculate summary statistics */
    n = 0;
    missing = 0;
    sum = 0;
    sum_squares = 0;
    mysql_data_seek (res_set, 0);
    while ((row = mysql_fetch_row (res_set)) != NULL)
    {
        if (row[col_num] == NULL)
            missing++;
        else
        {
            n++;
            val = atof (row[col_num]);  /* convert string to number */
            sum += val;
            sum_squares += val * val;
        }
    }

    if (n == 0)
        printf ("No observations\n");
    else
    {
        printf ("Number of observations: %lu\n", n);
        printf ("Missing observations: %lu\n", missing);
        printf ("Sum: %g\n", sum);
        printf ("Mean: %g\n", sum / n);
        printf ("Sum of squares: %g\n", sum_squares);
        if (n > 1)  /* variance is undefined for a single observation */
        {
            var = ((n * sum_squares) - (sum * sum)) / (n * (n - 1));
            printf ("Variance: %g\n", var);
            printf ("Standard deviation: %g\n", sqrt (var));
        }
    }
}
Note the call to mysql_data_seek() that precedes the mysql_fetch_row() loop. It positions to the first row of the result set, which is useful in case you want to call summary_stats() multiple times for the same result set (for example, to calculate statistics on several different columns). The effect is that each time summary_stats() is invoked, it "rewinds" to the beginning of the result set. The use of mysql_data_seek() requires that you create the result set with mysql_store_result(). If you create it with mysql_use_result(), you can only process rows in order, and you can process them only once.
summary_stats() is a relatively simple function, but it should give you an idea of how you could program more complex calculations, such as a least-squares regression on two columns or standard statistics such as a t-test or an analysis of variance.
If inserted literally into a query, data values containing quotes, nulls, or backslashes can cause problems when you try to execute the query. The following discussion describes the nature of the difficulty and how to solve it.
Suppose you want to construct a SELECT query based on the contents of the null-terminated string pointed to by the name_val variable:
char query[1024];

sprintf (query, "SELECT * FROM mytbl WHERE name='%s'", name_val);
If the value of name_val is something like O'Malley, Brian, the resulting query is illegal because a quote appears inside a quoted string:
SELECT * FROM mytbl WHERE name='O'Malley, Brian'
You need to treat the quote specially so that the server doesn't interpret it as the end of the name. The ANSI SQL convention for doing this is to double the quote within the string. MySQL understands that convention and also allows the quote to be preceded by a backslash, so you can write the query using either of the following formats:
SELECT * FROM mytbl WHERE name='O''Malley, Brian'
SELECT * FROM mytbl WHERE name='O\'Malley, Brian'
Another problematic situation involves the use of arbitrary binary data in a query. This happens, for example, in applications that store images in a database. Because a binary value can contain any character (including quotes or backslashes), it cannot be considered safe to put into a query as is.
To deal with this problem, use mysql_real_escape_string(), which encodes special characters to make them usable in quoted strings. Characters that mysql_real_escape_string() considers special are the null character, single quote, double quote, backslash, newline, carriage return, and Ctrl-Z. (The last one is special on Windows, where it often signifies end-of-file.)
When should you use mysql_real_escape_string()? The safest answer is "always." However, if you're sure of the format of your data and know that it's okay (perhaps because you have performed some prior validation check on it), you need not encode it. For example, if you are working with strings that you know represent legal phone numbers consisting entirely of digits and dashes, you don't need to call mysql_real_escape_string(). Otherwise, you probably should.
mysql_real_escape_string() encodes problematic characters by turning them into 2-character sequences that begin with a backslash. For example, a null byte becomes '\0', where the '0' is a printable ASCII zero, not a null. Backslash, single quote, and double quote become '\\', '\'', and '\"'.
To use mysql_real_escape_string(), invoke it as follows:
to_len = mysql_real_escape_string (conn, to_str, from_str, from_len);
mysql_real_escape_string() encodes from_str and writes the result into to_str. It also adds a terminating null, which is convenient because you can use the resulting string with functions such as strcpy(), strlen(), or printf().
from_str points to a char buffer containing the string to be encoded. This string can contain anything, including binary data. to_str points to an existing char buffer where you want the encoded string to be written; do not pass an uninitialized or NULL pointer, expecting mysql_real_escape_string() to allocate space for you. The length of the buffer pointed to by to_str must be at least (from_len*2)+1 bytes long. (It's possible that every character in from_str will need encoding with two characters; the extra byte is for the terminating null.)
from_len and to_len are unsigned long values. from_len indicates the length of the data in from_str; it's necessary to provide the length because from_str may contain null bytes and cannot be treated as a null-terminated string. to_len, the return value from mysql_real_escape_string(), is the actual length of the resulting encoded string, not counting the terminating null.
When mysql_real_escape_string() returns, the encoded result in to_str can be treated as a null-terminated string because any nulls in from_str are encoded as the printable '\0' sequence.
To rewrite the SELECT-constructing code so that it works even for name values that contain quotes, we could do something like the following:
char query[1024], *p;

p = strcpy (query, "SELECT * FROM mytbl WHERE name='");
p += strlen (p);
p += mysql_real_escape_string (conn, p, name, strlen (name));
*p++ = '\'';
*p = '\0';
Yes, that's ugly. If you want to simplify the code a bit, at the cost of using a second buffer, do the following instead:
char query[1024], buf[1024];

(void) mysql_real_escape_string (conn, buf, name, strlen (name));
sprintf (query, "SELECT * FROM mytbl WHERE name='%s'", buf);
mysql_real_escape_string() is unavailable prior to MySQL 3.23.14. As a workaround, you can use mysql_escape_string() instead:
to_len = mysql_escape_string (to_str, from_str, from_len);
The difference between them is that mysql_real_escape_string() uses the character set for the current connection to perform encoding. mysql_escape_string() uses the default character set (which is why it doesn't take a connection handler argument). To write source that will compile under any version of MySQL, include the following code fragment in your file:
#if !defined(MYSQL_VERSION_ID) || (MYSQL_VERSION_ID<32314)
#define mysql_real_escape_string(conn,to_str,from_str,len) \
        mysql_escape_string(to_str,from_str,len)
#endif
Then write your code in terms of mysql_real_escape_string(); if that function is unavailable, the #define causes it to be mapped to mysql_escape_string() instead.
One of the jobs for which mysql_real_escape_string() is essential involves loading image data into a table. This section shows how to do it. (The discussion applies to any other form of binary data as well.)
Suppose you want to read images from files and store them in a table named picture along with a unique identifier. The BLOB type is a good choice for binary data, so you could use a table specification like this:
CREATE TABLE picture
(
    pict_id     INT NOT NULL PRIMARY KEY,
    pict_data   BLOB
);
To actually get an image from a file into the picture table, the following function, load_image(), does the job, given an identifier number and a pointer to an open file containing the image data:
int load_image (MYSQL *conn, int id, FILE *f)
{
    char          query[1024*100], buf[1024*10], *p;
    unsigned long from_len;
    int           status;

    sprintf (query, "INSERT INTO picture (pict_id,pict_data) VALUES (%d,'", id);
    p = query + strlen (query);
    while ((from_len = fread (buf, 1, sizeof (buf), f)) > 0)
    {
        /* don't overrun end of query buffer! */
        if (p + (2*from_len) + 3 > query + sizeof (query))
        {
            print_error (NULL, "image too big");
            return (1);
        }
        p += mysql_real_escape_string (conn, p, buf, from_len);
    }
    *p++ = '\'';
    *p++ = ')';
    status = mysql_real_query (conn, query, (unsigned long) (p - query));
    return (status);
}
load_image() doesn't allocate a very large query buffer (100KB), so it works only for relatively small images. In a real-world application, you might allocate the buffer dynamically based on the size of the image file.
Getting an image value (or any binary value) back out of a database isn't nearly as much of a problem as putting it in to begin with because the data value is available in raw form in the MYSQL_ROW variable, and the length is available by calling mysql_fetch_lengths(). Just be sure to treat the value as a counted string, not as a null-terminated string.
MySQL allows you to get information about the structure of your tables, using any of the following queries (which are equivalent):
SHOW COLUMNS FROM tbl_name;
SHOW FIELDS FROM tbl_name;
DESCRIBE tbl_name;
EXPLAIN tbl_name;
Each statement is like SELECT in that it returns a result set. To find out about the columns in the table, all you need to do is process the rows in the result to pull out the information you want. For example, if you issue a DESCRIBE president statement using the mysql client, it returns the following information:
mysql> DESCRIBE president;
+------------+-------------+------+-----+------------+-------+
| Field      | Type        | Null | Key | Default    | Extra |
+------------+-------------+------+-----+------------+-------+
| last_name  | varchar(15) |      |     |            |       |
| first_name | varchar(15) |      |     |            |       |
| suffix     | varchar(5)  | YES  |     | NULL       |       |
| city       | varchar(20) |      |     |            |       |
| state      | char(2)     |      |     |            |       |
| birth      | date        |      |     | 0000-00-00 |       |
| death      | date        | YES  |     | NULL       |       |
+------------+-------------+------+-----+------------+-------+
If you execute the same query from your own client program, you get the same information (without the boxes).
If you want information only about a single column, add the column name:
mysql> DESCRIBE president birth;
+-------+------+------+-----+------------+-------+
| Field | Type | Null | Key | Default    | Extra |
+-------+------+------+-----+------------+-------+
| birth | date |      |     | 0000-00-00 |       |
+-------+------+------+-----+------------+-------+
This section discusses some common MySQL C API programming errors and how to avoid them. (These problems crop up periodically on the MySQL mailing list; I'm not making them up.)
The examples shown earlier in this chapter invoke mysql_init() with a NULL argument. That tells mysql_init() to allocate and initialize a MYSQL structure and return a pointer to it. Another approach is to pass a pointer to an existing MYSQL structure. In this case, mysql_init() will initialize that structure and return a pointer to it without allocating the structure itself. If you want to use this second approach, be aware that it can lead to certain subtle difficulties. The following discussion points out some problems to watch out for.
If you pass a pointer to mysql_init(), it must actually point to something. Consider the following piece of code:
main ()
{
    MYSQL *conn;

    mysql_init (conn);
    ...
}
The problem is that mysql_init() receives a pointer, but that pointer doesn't point anywhere sensible. conn is a local variable and thus is uninitialized storage that can point anywhere when main() begins execution. That means mysql_init() will use the pointer and scribble on some random area of memory. If you're lucky, conn will point outside your program's address space and the system will terminate it immediately so that you'll realize that the problem occurs early in your code. If you're not so lucky, conn will point into some data that you don't use until later in your program, and you won't notice a problem until your program actually tries to use that data. In that case, your problem will appear to occur much farther into the execution of your program than where it actually originates and may be much more difficult to track down.
Here's a similar piece of problematic code:
MYSQL *conn;

main ()
{
    mysql_init (conn);
    mysql_real_connect (conn, ...);
    mysql_query (conn, "SHOW DATABASES");
    ...
}
In this case, conn is a global variable, so it's initialized to 0 (that is, to NULL) before the program starts up. mysql_init() sees a NULL argument, so it initializes and allocates a new connection handler. Unfortunately, the value of conn remains NULL because no value is ever assigned to it. As soon as you pass conn to a MySQL C API function that requires a non-NULL connection handler, your program will crash. The fix for both pieces of code is to make sure conn has a sensible value. For example, you can initialize it to the address of an already-allocated MYSQL structure:
MYSQL conn_struct, *conn = &conn_struct; ... mysql_init (conn);
However, the recommended (and easier!) solution is simply to pass NULL explicitly to mysql_init(), let that function allocate the MYSQL structure for you, and assign conn the return value:
MYSQL *conn; ... conn = mysql_init (NULL);
In any case, don't forget to test the return value of mysql_init() to make sure it's not NULL (see Mistake 2).
Remember to check the status of calls that may fail. The following code doesn't do that:
MYSQL_RES *res_set; MYSQL_ROW row; res_set = mysql_store_result (conn); while ((row = mysql_fetch_row (res_set)) != NULL) { /* process row */ }
Unfortunately, if mysql_store_result() fails, res_set is NULL, in which case the while loop should never even be executed. (Passing NULL to mysql_fetch_row() likely will crash the program.) Test the return value of functions that return result sets to make sure you actually have something to work with.
The same principle applies to any function that may fail. When the code following a function depends on the success of the function, test its return value and take appropriate action if failure occurs. If you assume success, problems will occur.
Don't forget to check whether column values in the MYSQL_ROW array returned by mysql_fetch_row() are NULL pointers. The following code crashes on some machines if row[i] is NULL:
for (i = 0; i < mysql_num_fields (res_set); i++) { if (i > 0) fputc ('\t', stdout); printf ("%s", row[i]); } fputc ('\n', stdout);
The worst part about this mistake is that some versions of printf() are forgiving and print "(null)" for NULL pointers, which allows you to get away with not fixing the problem. If you give your program to a friend who has a less-forgiving printf(), the program will crash and your friend will conclude that you're a lousy programmer. The loop should be written as follows instead:
for (i = 0; i < mysql_num_fields (res_set); i++) { if (i > 0) fputc ('\t', stdout); printf ("%s", row[i] != NULL ? row[i] : "NULL"); } fputc ('\n', stdout);
The only time you need not check whether a column value is NULL is when you have already determined from the column's information structure that IS_NOT_NULL() is true.
Client library functions that expect you to supply buffers generally want them to really exist. Consider the following example, which violates that principle:
char *from_str = "some string"; char *to_str; unsigned long len; len = mysql_real_escape_string (conn, to_str, from_str, strlen (from_str));
What's the problem? to_str must point to an existing buffer, and it doesn't?it's not initialized and may point to some random location. Don't pass an uninitialized pointer as the to_str argument to mysql_real_escape_string() unless you want it to stomp merrily all over some random piece of memory. | http://etutorials.org/SQL/MySQL/Part+II+Using+MySQL+Programming+Interfaces/Chapter+6.+The+MySQL+C+API/Miscellaneous+Topics/ | CC-MAIN-2018-30 | refinedweb | 2,766 | 56.08 |
Back to: Python Tutorials For Beginners and Professionals
Inter Thread communication in Python with Examples
In this article, I am going to discuss Inter Thread communication in Python with examples. Please read our previous article where we discussed Synchronization in Python. As part of this article, we are going to discuss the following pointers in detail.
- What is Inter Thread communication in Python?
- Inter Thread communication by using Event Objects
- Inter Thread communication by using Condition Object
- Inter Thread communication by using Queue in python
- Types of Queues in Python
- FIFO Queue in Python
- LIFO Queue in Python
- Priority Queue in Python
What is Inter Thread communication in Python?
Sometimes one thread may be required to communicate to another thread depending on the requirement. This is called inter thread communication. In Python, we can implement inter thread communication by using the following ways:
- Event
- Condition
- Queue
Inter Thread communication by using Event Objects
Event object is the simplest communication mechanism between the threads. One thread signals an event and other threads wait for it. We need to create an Event object as follows
event = threading.Event()
Event manages some internal flags which can be set or cleared using the methods on event object
set():
When we call for this method an internal flag will become True and it represents a GREEN signal for all waiting threads.
event.set()
clear():
When we call for this method an internal flag value will become False and it represents a RED signal for all waiting threads.
event.clear()
isSet():
This method can be used to check whether the event is set or not.
event.isSet()
wait() or wait(seconds):
This method can be used to make a thread wait until an event is set.
event.wait()
Program: Inter Thread communication by using Event Objects in python (demo26.py)
from threading import * import time def traffic_police(): while True: time.sleep(5) print("Traffic Police Giving GREEN Signal") event.set() time.sleep(10) print("Traffic Police Giving RED Signal") event.clear() def driver(): num=0 while True: print("Drivers waiting for GREEN Signal") event.wait() print("Traffic Signal is GREEN...Vehicles can move") while event.isSet(): num=num+1 print("Vehicle No:", num," Crossing the Signal") time.sleep(2) print("Traffic Signal is RED...Drivers have to wait") event=Event() t1=Thread(target=traffic_police) t2=Thread(target=driver) t1.start() t2.start()
Output:
In the above program driver thread has to wait until Trafficpolice thread sets the event. i.e until a green signal is given. Once the Traffic police thread sets an event (giving GREEN signal), vehicles can cross the signal. Again, when the traffic police thread clears the event (giving RED Signal) then the driver thread has to wait.
The traffic police thread sets the event for a period of 10 seconds before clearing it. After clearing the event it again sets it after a period of 5 seconds. The driver thread will be executed in that period of 10 seconds, event set period, and waits for the event to be set in the event clear period. Here, the event object is communicating between both the threads for the expected execution of the code.
The above program will never stop because the condition in the while loop is always True. You need to do Ctrl+c to stop the execution
Inter Thread communication by using Condition Object:
Condition is the more advanced version of Event object for inter thread communication. A Condition object can be created as follows
condition = threading.Condition()
Condition object allows one or more threads to wait until notified by another thread. Condition is always associated with a lock (ReentrantLock). A condition object has acquire() and release() methods that call the corresponding methods of the associated lock.
acquire():
This method is used to acquire the condition object. i.e thread acquiring the internal lock.
condition.acquire()
release():
This method is used to release the condition object i.e thread releases internal lock
condition.release()
wait()|wait(time):
This method is used to wait until getting notification or until the mentioned time expires
condition.wait()
notify():
This method is used to give notification for one waiting thread
condition.notify()
notifyAll():
This method is used to give notification for all waiting threads
condition.notifyAll()
Let’s consider the following use case where there are two threads, a producer thread and a consumer thread,
Producer Thread:
The producer thread needs to acquire the Condition object before producing items and should notify the remaining thread (in our case consumer thread) about it.
- Acquire the condition object i.e lock (Step 1.1)
- Produce the item (Step 1.2)
- Add the item to the resource (Step 1.3)
- Notify the other threads about it (Step 1.4)
- Release the condition object i.e lock (Step 1.5)
Consumer Thread:
The consumer thread must acquire the condition and then have to wait for the notification from the other thread(in our case producer thread).
- Acquire the condition object i.e lock (Step 2.1)
- Wait for the notification (Step 2.2)
- Use or consume the items (Step 2.3)
- Release the condition object i.e lock (Step 2.4)
Program: Inter Thread communication by using Condition Objects (demo27.py)
from threading import * import time import random items=[] def produce(c): while True: c.acquire() #Step 1.1 item=random.randint(1,10) #Step 1.2 print("Producer Producing Item:", item) items.append(item) #Step 1.3 print("Producer giving Notification") c.notify() #Step 1.4 c.release() #Step 1.5 time.sleep(5) def consume(c): while True: c.acquire() #Step 2.1 print("Consumer waiting for update") c.wait() #Step 2.2 print("Consumer consumed the item", items.pop()) #Step 2.3 c.release() #Step 2.4 time.sleep(5) c=Condition() t1=Thread(target=consume, args=(c,)) t2=Thread(target=produce, args=(c,)) t1.start() t2.start()
Output:
The above program will never stop because the condition in the while loop is always True. You need to do Ctrl+c to stop the execution
Inter Thread communication by using Queue in python:
Queues Concept is the most enhanced Mechanism for inter thread communication and to share data between threads. Queue internally has Condition functionality which inturn includes the Lock functionality. Hence whenever we are using Queue we are not required to worry about Synchronization.
If we want to use Queues we should import the queue module. import queue
We can create Queue object as: q = queue.Queue()
Two important methods of the queue module
- put(): To put an item into the queue.
- get(): To remove and return an item from the queue.
Lets understand how this works with the previous use case of producer and consumer threads
Producer Thread can use the put() method to insert data in the queue. Whenever the producer method calls the put method, internally, first it will acquire the lock and then insert the data into the queue and then release the lock automatically. The put method also checks whether the queue is full or not and if queue is full then the Producer thread will enter into the waiting state by calling wait() method internally.
Consumer Thread can use the get() method to remove and get data from the queue. Whenever the consumer method calls the get method, internally, first it will acquire the lock and then get/remove the data from the queue and then release the lock automatically. If the queue is empty then the consumer thread will enter into the waiting state by calling wait() method internally. Once the queue is updated with data then the thread will be notified automatically.
Program: Inter Thread communication by using Queue in python (demo28.py)
from threading import * import time import random import queue items=[] def produce(c): while True: item=random.randint(1,10) #Step 1.2 print("Producer Producing Item:", item) q.put(item) print("Producer giving Notification") time.sleep(5) def consume(c): while True: print("Consumer waiting for update") print("Consumer consumed the item", q.get()) time.sleep(5) q=queue.Queue() t1=Thread(target=consume, args=(q,)) t2=Thread(target=produce, args=(q,)) t1.start() t2.start()
Output:
The above program will never stop because the condition in the while loop is always True. You need to do Ctrl+c to stop the execution
Types of Queues in Python
Python supports 3 Types of Queues:
- FIFO Queue
- LIFO Queue
- Priority Queue
FIFO Queue in Python:
This is the default behavior of the queue. The order in which we get the items from the queue, is the order in which the items are put(FIFO-First In First Out).
Program: FIFO Queue in python (demo29.py)
import queue q=queue.Queue() q.put(10) q.put(5) q.put(20) q.put(15) while not q.empty(): print(q.get())
Output:
LIFO Queue in Python:
The order in which we get the items in the queue, is the reverse order of the items in which they are inserted (Last In First Out).
q=queue.LifoQueue()
Program: LIFO Queue in python (demo30.py)
import queue q=queue.LifoQueue() q.put(10) q.put(5) q.put(20) q.put(15) while not q.empty(): print(q.get())
Output:
Priority Queue in Python:
This queue stores the items based on some priority and returns the items according to that priority. In order to decide that priority to save the elements in the queue, python uses some sorting mechanisms.
Program: Priority Queue in python (demo31.py)
import queue q=queue.PriorityQueue() q.put(10) q.put(5) q.put(20) q.put(15) while not q.empty(): print(q.get())
Output:
If the data is non-numeric, then the internal sorting mechanisms will be able to decide on the priority for the items. In such cases, we can associate the elements with some weights based on which priority can be decided.
Program: Priority Queue in python (demo32.py)
import queue q=queue.PriorityQueue() q.put((1,'Ruhan')) q.put((3,'Sharuk')) q.put((4,'Ajay')) q.put((2,'Siva')) while not q.empty(): print(q.get()[1])
Output:
We can see from the output that the values are popped according to the ordered which we provided. The point to be noted here is the data should be inserted in the form of tuples as shown above. Even while getting the data, we can access both the weight as well as value using indexing. i.e q.get()[0] will give the weight and 1.get()[1] will give the value.
In the next article, I am going to discuss Database Connectivity in Python with Examples. Here, in this article, I try to explain Inter Thread communication in Python with Examples. I hope you enjoy this Inter Thread communication in Python with Examples article. I would like to have your feedback. Please post your feedback, question, or comments about this article. | https://dotnettutorials.net/lesson/inter-thread-communication-in-python/ | CC-MAIN-2021-31 | refinedweb | 1,822 | 58.79 |
Microsoft's official enterprise support blog for AD DS and more
Hi all, Ned here again. I have worked many slow boot and slow logon cases over my career. The Directory Services support team here at Microsoft owns a sizable portion of those operations - user credentials, user profiles, logon and startup scripts, and of course, group policy processing. If I had to pick the initial finger pointing that customers routinely make, it's GP. Perhaps it's because group policy is the least well-understood part of the process, or maybe because it's the one with the most administrative fingers in the pie. When it comes down to reality though, group policy is more often not the culprit. Our new changes in Windows 8 will help you make that determination much quicker now.
Today I am going to talk about one of those times that GPO is the villain. Well, sort of... he's at least an enabler. More appropriately, the optional WMI Filtering portion of group policy using the Win32_Product class. Win32_Product has been around for many years and is both an inventory and administrative tool. It allows you to see all the installed MSI packages on a computer, install new ones, reinstall them, remove them, and configure them. When used correctly, it's a valuable option for scripters and Windows PowerShell junkies.
Unfortunately, Win32_Product also has some unpleasant behaviors. It uses a provider DLL that validates the consistency of every installed MSI package on the computer - or off of it, if using a remote administrative install point. That makes it very, very slow.
Where people trip up usually is group policy WMI filters. Perhaps the customer wants to apply managed Internet Explorer policy based on the IE version. Maybe they want to set AppLocker or Software Restriction policies only if the client has a certain program installed. Perhaps even use - yuck - Software Installation policy in a more controlled fashion.
Today I talk about some different options. Mike didn’t write this but he had some good thoughts when we talked about this offline so he gets some credit here too. A little bit. Tiny amount, really. Hardly worth mentioning.
If you have no idea what group policy WMI filters are, start here:
Back? Great, let's get to it.
The Win32_Product WMI class is part of the CIMV2 namespace and implements the MSI provider (msiprov.dll and associated msi.mof) to list and validate installed installation packages. You will see MsiInstaller event 1035 in the Application log for each application queried by the class:
Source: MsiInstaller Event ID: 1035 Description: Windows Installer reconfigured the product. Product Name: <ProductName>. Product Version: <VersionNumber>. Product Language: <languageID>. Reconfiguration success or error status: 0.
Source: MsiInstaller Event ID: 1035 Description: Windows Installer reconfigured the product. Product Name: <ProductName>. Product Version: <VersionNumber>. Product Language: <languageID>. Reconfiguration success or error status: 0.
And constantly repeated System events:
Event Source: Service Control Manager
Event ID: 7035
Description:
The Windows Installer service was successfully sent a start control.
Event Type: Information
Event Source: Service Control Manager
Event ID: 7036
Description:
Event Source: Service Control Manager
Event ID: 7035
Description:
The Windows Installer service was successfully sent a start control.
Event Type: Information
Event ID: 7036
That validation piece is the real speed killer. So much, in fact, that it can lead to group policy processing taking many extra minutes in Windows XP when you use this class in a WMI filter - or even cause processing to time out and fail altogether.. This is even more likely when:
Furthermore, Windows Vista and later Windows versions cap WMI filters execution times at 30 seconds; if they fail to complete by then, they are treated as FALSE. On those OS versions, it will often appear that Win32_Product just doesn’t work at all.
What are your alternatives?
Depending on what you are trying to accomplish, Group Policy Preferences could be the solution. GPP includes item-level targeting that has fast, efficient filtering of just about any criteria you can imagine. If you are trying to set some computer-based settings that a user cannot change and don’t mind preferences instead of managed policy settings, GPP is the way to go. As with all software, make sure you evaluate our latest patches to ensure it works as desired. As of this writing, those are:
For instance, let's say you have a plotting printer that Marketing cannot correctly use without special Contoso client software. Rather than using managed computer policy to control client printer installation and settings, you can use GPP Registry or Printer settings to modify the values needed.
Then you can use Item Level Targeting to control the installation based on the specialty software's presence and version.
Alternatively, you can use the registry and file system for your criteria, which works even if the software doesn't install via MSI packages:
What to do if you really, really need to use a WMI filter to determine MSI installed versions and names though? If you look around the Internet, you will find a couple of older proposed solutions that - to be frank - will not work for most customers.
The Win32reg_AddRemovePrograms is not present on most client systems though; it is a legacy class, first delivered by the old SMS 2003 management WMI system. I suspect one of the reasons the System Center folks discarded its use years ago for their own native inventory system was the same reason that the customer class above doesn’t work in #2 - it didn’t return 32-bit software installed on 64-bit computers. The class has not been updated since initial release 10 years ago.
#2 had the right idea though, at least as a valid customer workaround to avoid using Win32_Product: by creating your own WMI class using the generic registry provider to examine just the MSI uninstall registry keys, you can get a fast and simple query that reasonably detects installed software. Armed with the "how", you can also extend this to any kind of registry queries you need, without risk of tanking group policy processing. To do this, you just need notepad.exe and a little understanding of WMI.
Windows Management Instrumentation uses Managed Operation Framework (MOF) files to describe the Common Information Model (CIM) classes. You can create your own MOF files and compile them into the CIM repository using a simple command-line tool called mofcomp.exe.
You need to be careful here. This means that once you write your MOF you should validate it by using the mofcomp.exe -check argument on your standard client and server images. It also means that you should test this on those same machines using the -class:createonly argument (and not setting the -autorecover argument or #PRAGMA AUTORECOVER pre-processor) to ensure it doesn't already exist. The last thing you want to do is break some other class.
When done testing, you're ready to give it a go. Here is a sample MOF, wrapped for readability. Note the highlighted sections that describe what the MOF examines and what the group policy WMI filter can use as query criteria. Unlike the oft-copied sample, this one understands both the normal native architecture registry path as well as the Wow6432node path that covers 32-bit applications installed on a 64-bit system.
Start copy below =======>
// "AS-IS" sample MOF file for returning the two uninstall registry subkeys
// Unsupported, provided purely as a sample
// Requires compilation. Example: mofcomp.exe sampleproductslist.mof
// Implements sample classes: "SampleProductList" and "SampleProductlist32"
// (for 64-bit systems with 32-bit software)
#PRAGMA AUTORECOVER
[dynamic, provider("RegProv"),
ProviderClsid("{fe9af5c0-d3b6-11ce-a5b6-00aa00680c3f}"),ClassContext("local|HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Uninstall")]
class SampleProductsList {
[key] string KeyName;
[read, propertycontext("DisplayName")] string DisplayName;
[read, propertycontext("DisplayVersion")] string DisplayVersion;
};
ProviderClsid("{fe9af5c0-d3b6-11ce-a5b6-00aa00680c3f}"),ClassContext("local|HKEY_LOCAL_MACHINE\\SOFTWARE\\Wow6432node\\Microsoft\\Windows\\CurrentVersion\\Uninstall")]
class SampleProductsList32 {
<======= End copy above
Examining this should also give you interesting ideas about other registry-to-WMI possibilities, I imagine.
Copy this sample to a text file named with a MOF extension, store it in the %systemroot%\system32\wbem folder on a test machine, and then compile it from an administrator-elevated CMD prompt using mofcomp.exe filename. For example:
To test if the sample is working you can use WMIC.EXE to list the installed MSI packages. For example, here I am on a Windows 7 x64 computer with Office 2010 installed; that suite contains both 64 and 32-bit software so I can use both of my custom classes to list out all the installed software:
Note that I did not specify a namespace in the sample MOF, which means it updates the \\root\default namespace, instead of the more commonly used \\root\cimv2 namespace. This is intentional: the Windows XP implementation of registry provider is in the Default namespace, so this makes your MOF OS agnostic. It will work perfectly well on XP, 2003, 2008, Vista, 7, or even the Windows 8 family. Moreover, I don’t like updating the CIMv2 namespace if I can avoid it - it already has enough classes and is a bit of a dumping ground.
Now I need a way to get this MOF file to all my computers. The easiest way is to return to Group Policy Preferences; create a GPP policy that copies the file and creates a scheduled task to run MOFCOMP at every boot up (you can change this scheduling later or even turn it off, once you are confident all your computers have the new classes).
You can also install and compile the MOF manually, use psexec.exe, make it part of your standard OS image, deploy it using a software distribution system, or whatever. The example above is just that - an example.
Now that all your computers know about your new WMI class, you can create a group policy WMI filter that uses it. Here are a couple examples; note that I remembered to change the namespace from CIMv2 to DEFAULT!
You're in business with a system that, while not optimal, is certainly is far better than Win32_Product. It’s fast and lightweight, relatively easy to manage, and like all adequate solutions, designed not to make things worse in its efforts to make things different.
AskDS contributor Fabian Müller had another idea that he uses with customers:
1. Define environment variables using GPP based on Registry Item-Level targeting filters or just deploy the variables during software installation phase, e.g. %IEversion%= 9
2. Use this environment variable in WMI filters like this: Root\CIMV2;SELECT VARIABLEVALUE FROM Win32_Environment WHERE NAME='IEversion' AND VARIABLEVALUE='9'
Disadvantage: First computer start or user logon will not pass the WMI filter since the ENV variable had to be created (if set by GPP). It would be better having this environment variable being created during softwareinstallation / deployment (or whatever software being deployed).
Advantage: The environment WMI query is very fast compared. And you can use it “multi-purpose”. For example, as part of CMD-based startup and logon scripts.
Software Installation policy is not designed to be an enterprise software management solution and neither are individual application self-update systems. SI works fine in a small business network as a "no frills" solution but doesn’t offer real monitoring or remediation, and requires too much of the administrator to manage. If you are using these because of the old "we only fix IT when it's broken" answer, one argument you might take to management is that you are broken and operating at great risk: you have no way to deploy non-Microsoft updates in a timely and reliable fashion.
Even though the free Windows Update and Windows Software Update Service support Windows, Office, SQL, and Exchange patching, it’s probably not enough; anyone with more than five minutes in the IT industry knows that all of your software should be receiving periodic security updates. Does anyone here still think it's safe to run Adobe, Oracle, or thousands of other vendor products without controlled, monitored, and managed patching? If your network doesn't have a real software patching system, it's like a building with no sprinklers or emergency exits: nothing to worry about… until there's a fire. You wouldn’t run computers without anti-virus protection, but the number of customers I speak to that have zero security patching strategy is very worrying.
It's not 1998 anymore, folks. A software and patch management system isn’t an option anymore if you have a business with more than a hundred computers; those days are done for everyone. Even for Apple, although they haven't realized it yet. We make System Center, but there are other vendors out there too, and I’d rather you bought a competing product than have no patch management at all.
Until next time,
- Ned "pragma-tism" Pyle
Hi.
USMT also now includes a number of other less sexy - but still important - changes. Here are the high points:
USMT also warns about the risks of using the /C option (rather than /VSC combined with ensuring applications are not locking files), and how many units were not migrated:
USMT also warns about the risks of using the /C option (rather than /VSC combined with ensuring applications are not locking files), and how many units were not migrated:
MIG_CATALOG_PRESERVE_MEMORY=1
MIG_CATALOG_PRESERVE_MEMORY=1
When set, loadstate trims its memory usage much more aggressively. The consequence of this is slower restoration, so don’t use this switch willy-nilly.
When set, loadstate trims its memory usage much more aggressively. The consequence of this is slower restoration, so don’t use this switch willy-nilly.
USMT 5.0 still works with Windows XP through Windows 7, and adds Windows 8 x86 and AMD64 support as well. All of the old rules around CPU architecture and application migration are unchanged in the beta version (USMT 6.2.8250)..
Ned “there are lots of new manifests too, but I just couldn’t be bothered” Pyle
Hi.
Remote RSOP logging and Group Policy refresh require that you open firewall ports on the targeted computers. This means allowing inbound communication for RPC, WMI/DCOM, event logs, and scheduled tasks. You can enable the built-in Windows Advanced Firewall inbound rules:
These are part of the “Remote Scheduled Tasks Management”, “Remote Event Log Management”, and “Windows Management Instrumentation” groups. These are TCP RPC port 135, named pipe port 445, and the dynamic ports associated with the endpoint mapper, like always.
Furthermore, remember that this article references a pre-release product. Microsoft does not support Windows 8 Consumer Preview or Windows Server "8" Beta in production environments unless you have a special agreement with Microsoft. Read that EULA you accepted when installing!
Ned “I used a fancy arrow!”?
Maybe a li'l ol' casting issue here
Ahh, that's better. Get out the hex converter.
Ahh, that's better. Get out the hex converter..
Is teaming network adapters on Domain Controllers supported by Microsoft? I found KB.
(Updated) Maybe! :-D. :)!
Is it possible to create a WMI Filter that detects only virtual machines? We want a group policy that will apply specifically to our virtualized guests.2. New child or new tree domain: if the parent/tree domain hosts DNS, install DNS3. Replica: if the current domain hosts DNS, install DNS
How can I disable a user on all domain controllers, without waiting for (or forcing) AD replication?
2008r2-01 2008r2-02
2. Run a FOR loop command to read that list and disable the specified user against each domain controller.
FOR /f %i IN (some text file) DO dsmod user "some DN" -disabled -yes -s %i}
get-adcomputer -searchbase "your DC OU" -filter * | foreach {disable-adaccount "user logon ID" -server $_.dnshostname}.
We have found that modifying the security on a DFSR replicated folder and its contents causes a big DFSR replication backlog. We need to make these permissions changes though; is there any way to avoid that backlog?]
Is there any chance that DFSR could lock a file while it is replicating outbound and prevent user access to their data?.
Why does TechNet state that USMT 4.0 offline migrations don’t work for certain OS settings? How do I figure out the complete list?).
One of my customers has found that the "Everyone" group is added to the below folders in Windows 2003 and Windows 2008:
Windows Server 2008
C:\ProgramData\Microsoft\Crypto\DSS\MachineKeys
C:\ProgramData\Microsoft\Crypto/RSA\MachineKeys…
- Ned "Jonathan is seriously going to kill me" Pyle | http://blogs.technet.com/b/askds/archive/2012/04.aspx?PostSortBy=MostViewed&PageIndex=1 | CC-MAIN-2014-35 | refinedweb | 2,767 | 53.61 |
Launchers and Choosers
When we discussed storage earlier in this series, we introduced the concept of isolated applications. In the same way that storage is isolated such that you can’t access the data stored by another application, the application itself is isolated from the operating system.
For all these scenarios, the framework has introduced launchers and choosers, which are sets of APIs that request a specific task from the operating system. Once the task is completed, control is returned to the application.

Launchers are “fire and forget” APIs. You request the operation and don’t expect anything in return—for example, starting a phone call or playing a video.
Choosers are used to get data from a native application—for example, contacts from the People Hub—and import it into your app.
All the launchers and choosers are available in the Microsoft.Phone.Tasks namespace and share the same behavior:
- Every launcher and chooser is represented by a specific class.
- If needed, you set some properties that are used to define the launcher or chooser’s settings.
- With a chooser, you’ll need to subscribe to the Completed event, which is triggered when the operation is completed.
- The
Show()method is called to execute the task.
Note: Launchers and choosers can’t be used to override the built-in Windows Phone security mechanism, so you won’t be able to execute operations without explicit permission from the user.
In the following sample, you can see a launcher that sends an email using the
EmailComposeTask class.
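A minimal sketch of such a launcher, assuming the standard EmailComposeTask API; the recipient, subject, and body are placeholder values:

```csharp
private void OnSendMailClicked(object sender, RoutedEventArgs e)
{
    EmailComposeTask task = new EmailComposeTask
    {
        To = "info@qmatteoq.com",
        Subject = "Hello!",
        Body = "This mail was sent by a Windows Phone application."
    };
    // Show() opens the native mail compose screen; control returns
    // to the application once the mail has been sent or discarded.
    task.Show();
}
```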
The following sample demonstrates how to use a chooser. We’re going to save a new contact in the People Hub using the
SaveContactTask class.
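The following is a sketch of this scenario, assuming the standard SaveContactTask API; the contact details and the handler name are illustrative:

```csharp
private void OnSaveContactClicked(object sender, RoutedEventArgs e)
{
    SaveContactTask task = new SaveContactTask();
    task.Completed += task_Completed;
    task.FirstName = "Matteo";
    task.LastName = "Pagani";
    task.Show();
}

private void task_Completed(object sender, SaveContactResult e)
{
    // Always check the result: the user may have canceled the operation.
    if (e.TaskResult == TaskResult.OK)
    {
        MessageBox.Show("The contact has been saved.");
    }
}
```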
Every chooser returns a
TaskResult property, with the status of the operation. It’s important to verify that the status is
TaskResult.OK before moving on, because the user could have canceled the operation.
The following is a list of all the available launchers:
- MapsDirectionTask is used to open the native Map application and calculate a path between two places.
- MapsTask is used to open the native Map application centered on a specific location.
- MapDownloaderTask is used to manage the offline maps support new to Windows Phone 8. With this task, you’ll be able to open the Settings page used to manage the downloaded maps.
- MapUpdaterTask is used to redirect the user to the specific Settings page to check for offline maps updates.
- ConnectionSettingsTask is used to quickly access the different Settings pages to manage the different available connections, like Wi-Fi, cellular, or Bluetooth.
- MarketplaceDetailTask is used to display the detail page of an application on the Windows Phone Store. If you don’t provide the application ID, it will open the detail page of the current application.
- MarketplaceHubTask is used to open the Store to a specific category.
- MarketplaceReviewTask is used to open the page in the Windows Phone Store where the user can leave a review for the current application.
- MarketplaceSearchTask is used to start a search for a specific keyword in the Store.
- MediaPlayerLauncher is used to play audio or a video using the internal Windows Phone player. It can play both files embedded in the Visual Studio project and those saved in the local storage.
- PhoneCallTask is used to start a phone call.
- ShareLinkTask is used to share a link on a social network using the Windows Phone embedded social features.
- ShareStatusTask is used to share custom status text on a social network.
- ShareMediaTask is used to share one of the pictures from the Photos Hub on a social network.
- SmsComposeTask is used to prepare a text message and send it.
- WebBrowserTask is used to open a URI in Internet Explorer for Windows Phone.
- SaveAppointmentTask is used to save an appointment in the native Calendar app.
The following is a list of available choosers:
- AddressChooserTask is used to import a contact’s address.
- CameraCaptureTask is used to take a picture with the integrated camera and import it into the application.
- PhoneNumberChooserTask is used to import a contact’s phone number.
- PhotoChooserTask is used to import a photo from the Photos Hub.
- SaveContactTask is used to save a new contact in the People Hub. The chooser simply returns whether the operation completed successfully.
- SaveEmailAddressTask is used to add a new email address to an existing or new contact. The chooser simply returns whether the operation completed successfully.
- SavePhoneNumberTask is used to add a new phone number to an existing contact. The chooser simply returns whether the operation completed successfully.
- SaveRingtoneTask is used to save a new ringtone (which can be part of the project or stored in the local storage). It returns whether the operation completed successfully.
Getting Contacts and Appointments
Launchers already provide a basic way of interacting with the People Hub, but they always require user interaction. They open the People Hub and the user must choose which contact to import.).
In the following table, you can see which data you can access based on where the contacts are saved.
To know where the data is coming from, you can use the
Accounts property, which is a collection of the accounts where the information is stored. In fact, you can have information for the same data split across different accounts.
Working With Contacts
Each contact is represented by the
Contact class, which contains all the information about a contact, like
DisplayName,
Addresses,
Birthdays, etc. (basically, all the information that you can edit when you create a new contact in the People Hub).
Note: To access the contacts, you need to enable the
ID_CAP_CONTACTS option in the manifest file.
Interaction with contacts starts with the
Contacts class which can be used to perform a search by using the
SearchAsync() method. The method requires two parameters: the keyword and the filter to apply. There are two ways to start a search:
- A generic search: The keyword is not required since you’ll simply get all the contacts that match the selected filter. This type of search can be achieved with two filter types:
FilterKind.PinnedToStart, which returns only the contacts that the user has pinned on the Start screen, and
FilterKind.None, which simply returns all the available contacts.
- A search for a specific field: In this case, the search keyword will be applied based on the selected filter. The available filters are
DisplayName,
EmailAddress, and
PhoneNumber.
The
SearchAsync() method uses a callback approach; when the search is completed, an event called
SearchCompleted is raised.
In the following sample, you can see a search that looks for all contacts whose name is John. The collection of returned contacts is presented to the user with a
ListBox control.
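A sketch of such a search; the ListBox named ContactsList is an assumption (it would be defined in the page’s XAML):

```csharp
private void OnSearchClicked(object sender, RoutedEventArgs e)
{
    Contacts contacts = new Contacts();
    contacts.SearchCompleted += contacts_SearchCompleted;
    // Look for all the contacts whose display name is John.
    contacts.SearchAsync("John", FilterKind.DisplayName, null);
}

private void contacts_SearchCompleted(object sender, ContactsSearchEventArgs e)
{
    // e.Results contains the collection of matching Contact objects.
    ContactsList.ItemsSource = e.Results;
}
```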
Tip: If you want to start a search for another field that is not included in the available filters, you’ll need to get the list of all available contacts by using the
FilterKind.None option and apply a filter using a LINQ query. The difference is that built-in filters are optimized for better performance, so make sure to use a LINQ approach only if you need to search for a field other than a name, email address, or phone number.
Working With Appointments
Getting data from the calendar works in a very similar way: each appointment is identified by the
Appointment class, which has properties like
Subject,
Status,
Location,
StartTime, and
EndTime.
To interact with the calendar, you’ll have to use the
Appointments class that, like the
Contacts class, uses a method called
SearchAsync() to start a search and an event called
SearchCompleted to return the results.
The only two required parameters to perform a search are the start date and the end date. You’ll get in return all the appointments within this time frame. Optionally, you can also set a maximum number of results to return or limit the search to a specific account.
In the following sample, we retrieve all the appointments that occur between the current date and the day before, and we display them using a
ListBox control.
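A sketch of the described query; the ListBox named AppointmentsList is an assumption:

```csharp
private void OnSearchClicked(object sender, RoutedEventArgs e)
{
    Appointments appointments = new Appointments();
    appointments.SearchCompleted += appointments_SearchCompleted;
    // All the appointments between yesterday and now.
    appointments.SearchAsync(DateTime.Now.AddDays(-1), DateTime.Now, null);
}

private void appointments_SearchCompleted(object sender, AppointmentsSearchEventArgs e)
{
    AppointmentsList.ItemsSource = e.Results;
}
```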
Tip: The only way to filter the results is by start date and end date. If you need to apply additional filters, you’ll have to perform LINQ queries on the results returned by the search operation.
A Private Contact Store for Applications
Windows Phone 8 has introduced a new class called
ContactStore.
The
ContactStore class belongs to the
Windows.Phone.PersonalInformation namespace and it offers a method called
CreateOrOpenAsync(). The method has to be called every time you need to interact with the private contacts book. If it doesn’t exist, it will be created; otherwise, it will simply be opened.
When you create a
ContactStore you can set how the operating system should provide access to it:
- The first parameter’s type is
ContactStoreSystemAccessMode, and it’s used to choose whether the application will only be able to edit contacts that belong to the private store (
ReadOnly), or the user will also be able to edit information using the People Hub (
ReadWrite).
- The second parameter’s type is
ContactStoreApplicationAccessMode, and it’s used to choose whether other third-party applications will be able to access all the information about our contacts (
ReadOnly) or only the most important ones, like name and picture (
LimitedReadOnly).
The following sample shows the code required to create a new private store:
private async void OnCreateStoreClicked(object sender, RoutedEventArgs e) { ContactStore store = await ContactStore.CreateOrOpenAsync(ContactStoreSystemAccessMode.ReadWrite, ContactStoreApplicationAccessMode.ReadOnly); }
Tip: After you’ve created a private store, you can’t change the permissions you’ve defined, so you’ll always have to call the
CreateOrOpenAsync() method with the same parameters.
Creating Contacts
A contact is defined by the
StoredContact class, which is a bit different from the
Contact class we’ve previously seen. In this case, the only properties that are directly exposed are
GivenName and
FamilyName. All the other properties can be accessed by calling the
GetPropertiesAsync() method of the
StoredContact class, which returns a collection of type
Dictionary<string, object>.
Every item of the collection is identified by a key (the name of the contact’s property) and an object (the value). To help developers access the properties, all the available keys are exposed by a class named
KnownContactProperties. In the following sample, we use the key
KnownContactProperties.Email to store the user’s email address.
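A sketch of the described operation; the names and the email value are placeholders:

```csharp
private async void OnAddContactClicked(object sender, RoutedEventArgs e)
{
    ContactStore store = await ContactStore.CreateOrOpenAsync();
    StoredContact contact = new StoredContact(store)
    {
        GivenName = "Matteo",
        FamilyName = "Pagani"
    };

    // The email address is stored using the well-known key.
    IDictionary<string, object> properties = await contact.GetPropertiesAsync();
    properties.Add(KnownContactProperties.Email, "info@qmatteoq.com");

    await contact.SaveAsync();
}
```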
Tip: Since the
ContactStore is a dictionary, two values cannot have the same key. Before adding a new property to the contact, you’ll have to make sure that it doesn’t exist yet; otherwise, you’ll need to update the existing one.
The
StoredContact class also supports a way to store custom information by accessing the extended properties using the
GetExtendedPropertiesAsync() method. It works like the standard properties, except that the property key is totally custom. These kind of properties won’t be displayed in the People Hub since Windows Phone doesn’t know how to deal with them, but they can be used by your application.
In the following sample, we add new custom information called
MVP Category:
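A sketch of how the extended property might be added; the key and value are custom strings that only this application understands:

```csharp
private async void OnAddCustomFieldClicked(object sender, RoutedEventArgs e)
{
    ContactStore store = await ContactStore.CreateOrOpenAsync();
    StoredContact contact = new StoredContact(store)
    {
        GivenName = "Matteo",
        FamilyName = "Pagani"
    };

    // "MVP Category" is a custom key: the People Hub ignores it,
    // but this application can read it back later.
    IDictionary<string, object> extended = await contact.GetExtendedPropertiesAsync();
    extended.Add("MVP Category", "Windows Phone Development");

    await contact.SaveAsync();
}
```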
Searching for Contacts
Searching contacts in the private contact book is a little tricky because there’s no direct way to search a contact for a specific field.
Searches are performed using the
ContactQueryResult class, which is created by calling the
CreateContactQuery() method of the
ContactStore object. The only available operations are
GetContactsAsync(), which returns all the contacts, and
GetContactCountAsync(), which returns the number of available contacts.
You can also define in advance which fields you’re going to work with, but you’ll still have to use the
GetPropertiesAsync() method to extract the proper values. Let’s see how it works in the following sample, in which we look for a contact whose email address is
info@qmatteoq.com:
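A sketch of that search, assuming the standard ContactQueryOptions and ContactQueryResult APIs:

```csharp
private async void OnSearchClicked(object sender, RoutedEventArgs e)
{
    ContactStore store = await ContactStore.CreateOrOpenAsync();

    // Declare up front which fields we are going to read.
    ContactQueryOptions options = new ContactQueryOptions();
    options.DesiredFields.Add(KnownContactProperties.Email);

    ContactQueryResult query = store.CreateContactQuery(options);
    var contacts = await query.GetContactsAsync();
    foreach (StoredContact contact in contacts)
    {
        var properties = await contact.GetPropertiesAsync();
        if (properties.ContainsKey(KnownContactProperties.Email) &&
            properties[KnownContactProperties.Email].ToString() == "info@qmatteoq.com")
        {
            MessageBox.Show("Contact found!");
        }
    }
}
```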
You can define which fields you’re interested in by creating a new
ContactQueryOptions object and adding it to the
DesiredFields collection. Then, you can pass the
ContactQueryOptions object as a parameter when you create the
ContactQueryResult one. As you can see, defining the fields isn’t enough to get the desired result. We still have to query each contact using the
GetPropertiesAsync() method to see if the information value is the one we’re looking for.
The purpose of the
ContactQueryOptions class is to prepare the next query operations so they can be executed faster.
Updating and Deleting Contacts
Updating a contact is achieved in the same way as creating a new one: after you’ve retrieved the contact you want to edit, you have to change the required information and call the
SaveAsync() method again, as in the following sample:
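A sketch of the update flow; the email addresses are the placeholder values used throughout this section:

```csharp
private async void OnUpdateContactClicked(object sender, RoutedEventArgs e)
{
    ContactStore store = await ContactStore.CreateOrOpenAsync();
    ContactQueryResult query = store.CreateContactQuery();
    var contacts = await query.GetContactsAsync();

    foreach (StoredContact contact in contacts)
    {
        var properties = await contact.GetPropertiesAsync();
        if (properties.ContainsKey(KnownContactProperties.Email) &&
            properties[KnownContactProperties.Email].ToString() == "info@qmatteoq.com")
        {
            // Overwrite the old address and persist the change.
            properties[KnownContactProperties.Email] = "mail@domain.com";
            await contact.SaveAsync();
        }
    }
}
```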
After we’ve retrieved the user whose email address is
info@qmatteoq.com, we change it to
mail@domain.com, and save the updated contact.
To delete a contact, you call the
DeleteContactAsync() method on the
ContactStore object, passing as parameter the contact ID, which is stored in the
Id property of the
StoredContact class.
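A sketch of the deletion flow described above:

```csharp
private async void OnDeleteContactClicked(object sender, RoutedEventArgs e)
{
    ContactStore store = await ContactStore.CreateOrOpenAsync();
    ContactQueryResult query = store.CreateContactQuery();
    var contacts = await query.GetContactsAsync();

    foreach (StoredContact contact in contacts)
    {
        var properties = await contact.GetPropertiesAsync();
        if (properties.ContainsKey(KnownContactProperties.Email) &&
            properties[KnownContactProperties.Email].ToString() == "info@qmatteoq.com")
        {
            // The Id property uniquely identifies the contact in the store.
            await store.DeleteContactAsync(contact.Id);
        }
    }
}
```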
In the previous sample, after we’ve retrieved the contact with the email address
info@qmatteoq.com, we delete it using its unique identifier.
Dealing With Remote Synchronization
A common scenario for a private contact book is an application that keeps its contacts in sync with a remote service; in that case, you need a way to connect the local copy of a contact with the remote one. For this scenario, the
StoredContact class offers a property called
RemoteId to store such information. Having a
RemoteId also simplifies the search operations we’ve seen before. The
ContactStore class, in fact, offers a method called
FindContactByRemoteIdAsync(), which is able to retrieve a specific contact based on the remote ID as shown in the following sample:
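A minimal sketch; the remote ID value is a placeholder:

```csharp
private async void OnFindByRemoteIdClicked(object sender, RoutedEventArgs e)
{
    ContactStore store = await ContactStore.CreateOrOpenAsync();

    // The remote ID is whatever identifier your backend assigned to the contact.
    StoredContact contact = await store.FindContactByRemoteIdAsync("remote-id-1");
    if (contact != null)
    {
        MessageBox.Show(contact.GivenName);
    }
}
```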
There’s one important requirement to keep in mind: the
RemoteId property’s value should be unique across any application installed on the phone that uses a private contact book; otherwise, you’ll get an exception.
In this article published by Microsoft, you can see an implementation of a class called
RemoteIdHelper that offers some methods for adding random information to the remote ID (using a GUID) to make sure it’s unique.
Taking Advantage of Kid's Corner
Kid’s Corner is an interesting and innovative feature introduced in Windows Phone 8 that is especially useful for parents of young children. Basically, it’s a sandbox that we can customize. We can decide which apps, games, pictures, videos, and music can be accessed.
As developers, we are able to know when an app is running in Kid’s Corner mode. This way, we can customize the experience to avoid providing inappropriate content, such as sharing features.
Taking advantage of this feature is easy; we simply check the
Modes property of the
ApplicationProfile class, which belongs to the
Windows.Phone.ApplicationModel namespace. When it is set to
Default, the application is running normally. If it’s set to
Alternate, it’s running in Kid’s Corner mode.
private void OnCheckStatusClicked(object sender, RoutedEventArgs e) { if (ApplicationProfile.Modes == ApplicationProfileModes.Default) { MessageBox.Show("The app is running in normal mode."); } else { MessageBox.Show("The app is running in Kid's Corner mode."); } }
Speech APIs: Let's Talk With the Application
The purpose of speech services is to add vocal recognition support in your applications in the following ways:
- Enable users to speak commands to interact with the application, such as opening it and executing a task.
- Enable text-to-speech features so that the application is able to read text to users.
- Enable text recognition so that users can enter text by dictating it instead of typing it.
In this section, we’ll examine the basic requirements for implementing all three modes in your application.
Voice Commands
The user simply has to speak a command; if it’s successfully recognized, the application will be opened and, as developers, we’ll get some information to understand which command has been issued so that we can redirect the user to the proper page or perform a specific operation. Commands are declared in a Voice Command Definition (VCD) file, an XML file included in your project. If you right-click your project in Visual Studio and choose Add new item, you’ll find a template called
VoiceCommandDefinition in the Windows Phone section.
The following code sample is what a VCD file looks like:
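A reconstruction of a typical VCD file, consistent with the AddNote command discussed below; the prefix, page, and feedback text are illustrative:

```xml
<?xml version="1.0" encoding="utf-8"?>
<VoiceCommands xmlns="">
  <CommandSet xml:
    <CommandPrefix>My Notes</CommandPrefix>
    <Example>add a new note</Example>

    <Command Name="AddNote">
      <Example>add a new note</Example>
      <ListenFor>add [a] new note</ListenFor>
      <Feedback>Adding a new note...</Feedback>
      <Navigate Target="/AddNote.xaml" />
    </Command>
  </CommandSet>
</VoiceCommands>
```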
A VCD file can contain one or more
CommandSet nodes, which are identified by a
Name and a specific language (the
xml:lang attribute). The second attribute is the most important one. Your application will support voice commands only for the languages you’ve included in
CommandSet in the VCD file (the voice commands’ language is defined by users in the Settings page). You can have multiple
CommandSet nodes to support multiple languages.
Each
CommandSet can have a
CommandPrefix tag, which defines the phrase users speak to address your application (useful when the application’s name is hard to pronounce), and an
Example tag, which contains the text displayed by the Windows Phone dialog to help users understand what kind of commands they can use.
Then, inside a
CommandSet, you can add up to 100 commands identified by the
Command tag. Each command has the following characteristics:
- A unique name, which is set in the
Name attribute.
- The
Example tag shows users sample text for the current command.
- The
ListenFor tag contains the text that should be spoken to activate the command. Up to ten
ListenFor tags can be specified for a single command to cover variations of the text. You can also add optional words inside square brackets. In the previous sample, the
AddNote command can be activated by pronouncing both “add a new note” and “add new note.”
- The
Feedback tag is the text spoken by Windows Phone to notify users that it has understood the command and is processing it.
- The
Navigate tag, with its
Target attribute, can be used to specify which page of the application should be opened when the command is issued.
Once we’ve completed the VCD definition, we are ready to use it in our application.
Note: To use speech services, you’ll need to enable the
ID_CAP_SPEECH_RECOGNITION option in the manifest file.
Commands are embedded in a Windows Phone application by using a class called
VoiceCommandService, which belongs to the
Windows.Phone.Speech.VoiceCommands namespace. This static class exposes a method called
InstallCommandSetsFromFileAsync(), which requires the path of the VCD file we’ve just created.
private async void OnInitVoiceClicked(object sender, RoutedEventArgs e) { await VoiceCommandService.InstallCommandSetsFromFileAsync(new Uri("ms-appx:///VoiceCommands.xml")); }
The file path is expressed using a
Uri that should start with the
ms-appx:/// prefix. This
Uri refers to the Visual Studio project’s structure, starting from the root.
Phrase Lists
A VCD file can also contain a phrase list, as in the following sample:
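A sketch of a command set fragment using a phrase list, consistent with the note-opening example described below; the command name, page, and items are illustrative:

```xml
<Command Name="OpenNote">
  <Example>open the note 1</Example>
  <ListenFor>open the note {number}</ListenFor>
  <Feedback>Opening the note...</Feedback>
  <Navigate Target="/NotePage.xaml" />
</Command>

<PhraseList Label="number">
  <Item>1</Item>
  <Item>2</Item>
  <Item>3</Item>
</PhraseList>
```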
Phrase lists are used to manage parameters that can be added to a phrase using braces. Each
PhraseList node is identified by a
Label attribute, which is the keyword to include in the braces inside the
ListenFor node. In the previous example, users can say the phrase “open the note” followed by any of the numbers specified with the
Item tag inside the
PhraseList. You can have up to 2,000 items in a single list.
The APIs offer a way to keep a
PhraseList dynamically updated, as demonstrated in the following sample:
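A sketch of the update call; the command set name "NotesEnglish" and the phrase list label "number" are assumptions matching a hypothetical VCD file:

```csharp
private async void OnUpdatePhrasesClicked(object sender, RoutedEventArgs e)
{
    // The key is the Name attribute of the CommandSet in the VCD file.
    VoiceCommandSet commandSet = VoiceCommandService.InstalledCommandSets["NotesEnglish"];

    // The new collection replaces the previous content of the phrase list.
    await commandSet.UpdatePhraseListAsync("number", new string[] { "1", "2", "3", "4" });
}
```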
First, you have to get a reference to the current command set by using the
VoiceCommandService.InstalledCommandSets collection. As the index, you have to use the name of the set that you’ve defined in the VCD file (the
Name attribute of the
CommandSet tag). Once you have a reference to the set, you can call the
UpdatePhraseListAsync() method to update a list by passing two parameters:
- the name of the
PhraseList (set using the
Label attribute)
- the collection of new items, as an array of strings
It’s important to keep in mind that the
UpdatePhraseListAsync() method overrides the current items in the
PhraseList, so you will have to add all the available items every time, not just the new ones.
Intercepting the Requested Command
The command invoked by the user is sent to your application with the query string mechanism discussed earlier in this series. When an application is opened by a command, the user is redirected to the page specified in the
Navigate node of the VCD file. The following is a sample URI (the page and command names are illustrative):
/AddNote.xaml?voiceCommandName=AddNote&reco=add%20a%20new%20note
The
voiceCommandName parameter contains the spoken command, while the
reco parameter contains the full text that has been recognized by Windows Phone.
If the command supports a phrase list, you’ll get another parameter with the same name of the
PhraseList and the spoken item as a value. The following code is a sample URI based on the previous note sample, where the user can open a specific note by using the
OpenNote command:
/NotePage.xaml?voiceCommandName=OpenNote&reco=open%20the%20note%201&number=1
Using the APIs we saw earlier in this series, it’s easy to extract the needed information from the query string parameters and use them for our purposes, as in the following sample:
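A sketch of the handler; the command names match the hypothetical note application used throughout this section:

```csharp
protected override void OnNavigatedTo(NavigationEventArgs e)
{
    if (NavigationContext.QueryString.ContainsKey("voiceCommandName"))
    {
        string command = NavigationContext.QueryString["voiceCommandName"];
        switch (command)
        {
            case "AddNote":
                // Start the flow to create a new note.
                break;
            case "OpenNote":
                // The phrase list value arrives as a query string parameter.
                string number = NavigationContext.QueryString["number"];
                MessageBox.Show("Note number: " + number);
                break;
        }
    }
}
```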
We use a
switch statement to manage the different supported commands that are available in the
NavigationContext.QueryString collection. If the user is trying to open a note, we also get the value of the
number parameter.
Working With Speech Recognition
There are two ways to start speech recognition: by providing a user interface, or by working silently in the background.
In the first case, you can provide users a visual dialog similar to the one used by the operating system when holding the Start button. It’s the perfect solution to manage vocal commands because you’ll be able to give both visual and voice feedback to users.
This is achieved by using the
SpeechRecognizerUI class, which offers four key properties to customize the visual dialog:
- ListenText is the large, bold text that explains to users what the application is expecting.
- Example is additional text that is displayed below the
ListenText to help users better understand what kind of speech the application is expecting.
- ReadoutEnabled is a
Boolean property; when it’s set to true, Windows Phone will read the recognized text to users as confirmation.
- ShowConfirmation is another
Boolean property; when it’s set to true, users will be able to cancel the operation after the recognition process is completed.
The following sample shows how this feature is used to allow users to dictate a note. We ask users for the text of the note and then, if the operation succeeded, we display the recognized text.
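A sketch of that dictation flow; note that the four properties above are exposed through the recognizer’s Settings property (the prompt texts are placeholders):

```csharp
private async void OnStartRecognitionClicked(object sender, RoutedEventArgs e)
{
    SpeechRecognizerUI recognizer = new SpeechRecognizerUI();
    recognizer.Settings.ListenText = "Start dictating your note";
    recognizer.Settings.ExampleText = "Remember to buy some milk";
    recognizer.Settings.ReadoutEnabled = true;
    recognizer.Settings.ShowConfirmation = true;

    SpeechRecognitionUIResult result = await recognizer.RecognizeWithUIAsync();
    if (result.ResultStatus == SpeechRecognitionUIStatus.Succeeded)
    {
        MessageBox.Show(result.RecognitionResult.Text);
    }
}
```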
Notice how the recognition process is started by calling the
RecognizeWithUIAsync() method, which returns a
SpeechRecognitionUIResult object that contains all the information about the operation.
To silently recognize text, less code is needed since fewer options are available than were used for the dialog. We just need to start listening for the text and understand it. We can do this by calling the
RecognizeAsync() method of the
SpeechRecognizer class. The recognition result will be stored in a
SpeechRecognitionResult object, which is the same that was returned in the
RecognitionResult property by the
RecognizeWithUIAsync() method we used previously.
Using Custom Grammars
The code we’ve seen so far is used to recognize almost any word in the dictionary. For this reason, speech services will work only if the phone is connected to the Internet since the feature uses online Microsoft services to parse the results.
For this scenario, the Speech APIs provide a way to use a custom grammar and limit the number of words that are supported in the recognition process. There are three ways to set a custom grammar:
- using only the available standard sets
- manually adding the list of supported words
- storing the words in an external file
Again, the starting point is the
SpeechRecognizer class, which offers a property called
Grammars.
To load one of the predefined grammars, use the
AddGrammarFromPredefinedType() method, which accepts as parameters a string to identify it (you can choose any value) and the type of grammar to use. There are two sets of grammars: the standard
SpeechPredefinedGrammar.Dictation, and
SpeechPredefinedGrammar.WebSearch, which is optimized for web-related tasks.
In the following sample, we recognize speech using the
WebSearch grammar:
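A minimal sketch; the grammar key "WebSearch" is an arbitrary identifier:

```csharp
private async void OnStartRecognitionClicked(object sender, RoutedEventArgs e)
{
    SpeechRecognizer recognizer = new SpeechRecognizer();
    recognizer.Grammars.AddGrammarFromPredefinedType("WebSearch",
        SpeechPredefinedGrammar.WebSearch);

    SpeechRecognitionResult result = await recognizer.RecognizeAsync();
    MessageBox.Show(result.Text);
}
```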
Even more useful is the ability to allow the recognition process to understand only a few selected words. We can use the
AddGrammarFromList() method offered by the
Grammars property, which requires the usual identification key followed by a collection of supported words.
In the following sample, we set the
SpeechRecognizer class to understand only the words “save” and “cancel”.
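A sketch of that restricted recognition; the grammar key "commands" is arbitrary:

```csharp
private async void OnStartRecognitionClicked(object sender, RoutedEventArgs e)
{
    SpeechRecognizer recognizer = new SpeechRecognizer();
    string[] commands = new[] { "save", "cancel" };
    recognizer.Grammars.AddGrammarFromList("commands", commands);

    SpeechRecognitionResult result = await recognizer.RecognizeAsync();
    // Text is empty when the spoken word is not part of the grammar.
    if (result.Text == "save")
    {
        MessageBox.Show("The note has been saved.");
    }
}
```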
If the user says a word that is not included in the custom grammar, the
Text property of the
SpeechRecognitionResult object will be empty. The biggest benefit of this approach is that it doesn’t require an Internet connection since the grammar is stored locally.
The third and final way to load a grammar is by using another XML definition called
Speech Recognition Grammar Specification (SRGS). You can read more about the supported tags in the official documentation by W3C.
The following sample shows a custom grammar file:
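A sketch of such a grammar, consistent with the note commands described below ("Open the note", "Load a reminder"); the rule name is illustrative:

```xml
<?xml version="1.0" encoding="utf-8"?>
<grammar version="1.0" xml:
         xmlns="">
  <rule id="notesCommands">
    <item>
      <one-of>
        <item>open</item>
        <item>load</item>
      </one-of>
      <one-of>
        <item>the note</item>
        <item>a reminder</item>
      </one-of>
    </item>
  </rule>
</grammar>
```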
The file describes both the supported words and the correct order that should be used. The previous sample shows the supported commands to manage notes in an application, like “Open the note” or “Load a reminder,” while a command like “Reminder open the” is not recognized.
Visual Studio 2012 offers built-in support for these files with a specific template called
SRGS Grammar that is available when you right-click your project and choose Add new item.
Once the file is part of your project, you can load it using the
AddGrammarFromUri() method of the
SpeechRecognizer class that accepts as a parameter the file path expressed as a
Uri, exactly as we’ve seen for VCD files. From now on, the recognition process will use the grammar defined in the file instead of the standard one, as shown in the following sample:
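A minimal sketch; the file name CustomGrammar.xml and the key "srgs" are assumptions:

```csharp
private async void OnStartRecognitionClicked(object sender, RoutedEventArgs e)
{
    SpeechRecognizer recognizer = new SpeechRecognizer();
    // The grammar file is part of the Visual Studio project.
    recognizer.Grammars.AddGrammarFromUri("srgs",
        new Uri("ms-appx:///CustomGrammar.xml"));

    SpeechRecognitionResult result = await recognizer.RecognizeAsync();
    MessageBox.Show(result.Text);
}
```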
Using Text-to-Speech (TTS)
Text-to-speech is a technology that is able to read text to users in a synthesized voice. It can be used to create a dialogue with users so they won’t have to watch the screen to interact with the application.
The basic usage of this feature is really simple. The base class to interact with TTS services is
SpeechSynthesizer, which offers a method called
SpeakTextAsync(). You simply have to pass to the method the text that you want to read, as shown in the following sample:
private async void OnSpeakClicked(object sender, RoutedEventArgs e) { SpeechSynthesizer synth = new SpeechSynthesizer(); await synth.SpeakTextAsync("This is a sample text"); }
Moreover, it’s possible to customize how the text is pronounced by using a standard language called Synthesis Markup Language (SSML), which is based on the XML standard. This standard provides a series of XML tags that defines how a word or part of the text should be pronounced. For example, the speed, language, voice gender, and more can be changed.
The following sample is an example of an SSML file:
<?xml version="1.0"?> <speak version="1.0" xml: <voice age="5">This text is read by a child</voice> <break /> <prosody rate="x-slow">This text is read very slowly</prosody> </speak>
This code features three sample SSML tags:
voice for simulating the voice’s age,
break to add a pause, and
prosody to set the reading speed using the
rate attribute.
There are two ways to use an SSML definition in your application. The first is to create an external file by adding a new XML file in your project. Next, you can load it by passing the file path to the
SpeakSsmlFromUriAsync() method of the
SpeechSynthesizer class, similar to how we loaded the VCD file.
private async void OnSpeakClicked(object sender, RoutedEventArgs e) { SpeechSynthesizer synth = new SpeechSynthesizer(); await synth.SpeakSsmlFromUriAsync(new Uri("ms-appx:///SSML.xml")); }
Another way is to define the text to be read directly in the code by creating a string that contains the SSML tags. In this case, we can use the
SpeakSsmlAsync() method, which accepts the string to read as a parameter. The following sample shows the same SSML definition we’ve been using, but stored in a string instead of an external file.
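A sketch of the in-code approach, using a shortened version of the same SSML markup:

```csharp
private async void OnSpeakClicked(object sender, RoutedEventArgs e)
{
    StringBuilder ssml = new StringBuilder();
    ssml.Append("<speak version=\"1.0\" xml:lang=\"en-US\" ");
    ssml.Append("xmlns=\"\">");
    ssml.Append("<prosody rate=\"x-slow\">This text is read very slowly</prosody>");
    ssml.Append("</speak>");

    SpeechSynthesizer synth = new SpeechSynthesizer();
    await synth.SpeakSsmlAsync(ssml.ToString());
}
```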
You can learn more about the SSML definition and available tags in the official documentation provided by W3C.
Data Sharing
Data sharing is a new feature introduced in Windows Phone 8 that can be used to share data between different applications, including third-party ones.
There are two ways to manage data sharing:
- File sharing: The application registers an extension such as
.log. It will be able to manage any file with the registered extension that is opened by another application (for example, a mail attachment).
- Protocol sharing: The application registers a protocol such as
log:. Other applications will be able to use it to send plain data like strings or numbers.
In both cases, the user experience is similar:
- If no application is available on the device to manage the requested extension or protocol, users will be asked if they want to search the Store for one that can.
- If only one application is registered for the requested extension or protocol, it will automatically be opened.
- If multiple applications are registered for the same extension or protocol, users will be able to choose which one to use.
Let’s discuss how to support both scenarios in our application.
Note: There are some file types and protocols that are registered by the system, like Office files, pictures, mail protocols, etc. You can’t override them; only Windows Phone is able to manage them. You can see a complete list of the reserved types in the MSDN documentation.
File Sharing
File sharing support is declared in the manifest file, which you’ll need to edit manually by right-clicking it and choosing the View code option.
The extension is added in the
Extensions section, which should be defined under the
Token one:
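A sketch of the manifest fragment for the .log extension; the association name, icon paths, and content type are illustrative:

```xml
<Extensions>
  <FileTypeAssociation Name="LogFile" TaskID="_default" NavUriFragment="fileToken=%s">
    <Logos>
      <Logo Size="small" IsRelative="true">Assets/log-33x33.png</Logo>
      <Logo Size="medium" IsRelative="true">Assets/log-69x69.png</Logo>
      <Logo Size="large" IsRelative="true">Assets/log-176x176.png</Logo>
    </Logos>
    <SupportedFileTypes>
      <FileType ContentType="application/log">.log</FileType>
    </SupportedFileTypes>
  </FileTypeAssociation>
</Extensions>
```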
Every supported file type has its own
FileTypeAssociation tag, which is identified by the
Name attribute (which should be unique). Inside this node are two nested sections:
- Logos is optional and contains the icons that the operating system uses to represent files with the registered extension (for example, in a mail attachment list).
- SupportedFileTypes is required because it contains the extensions that are going to be supported for the current file type. Multiple extensions can be added.
The previous sample is used to manage the
.log file extension in our application.
When another application tries to open a file we support, our application is opened using a special URI:
/FileTypeAssociation?fileToken=89819279-4fe0-9f57-d633f0949a19
The
fileToken parameter is a GUID that univocally identifies the file—we’re going to use it later.
To manage the incoming URI, we need to introduce the
UriMapper class we talked about earlier in this series. When we identify this special URI, we’re going to redirect the user to a specific page of the application that is able to interact with the file.
The following sample shows what the
UriMapper looks like:
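A sketch of such a mapper; the class name and the token-parsing logic are illustrative:

```csharp
public class MyUriMapper : UriMapperBase
{
    public override Uri MapUri(Uri uri)
    {
        string tempUri = uri.ToString();
        if (tempUri.Contains("/FileTypeAssociation"))
        {
            // Extract the file token from the incoming URI.
            int tokenIndex = tempUri.IndexOf("fileToken=") + 10;
            string fileToken = tempUri.Substring(tokenIndex);

            // Retrieve the original file name to identify its extension.
            string fileName = SharedStorageAccessManager.GetSharedFileName(fileToken);
            string extension = Path.GetExtension(fileName);
            if (extension == ".log")
            {
                return new Uri("/LogPage.xaml?fileToken=" + fileToken, UriKind.Relative);
            }
        }
        // Any other URI is left untouched.
        return uri;
    }
}
```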
If the starting
Uri contains the
FileTypeAssociation keyword, it means that the application has been opened due to a file sharing request. In this case, we need to identify the opened file’s extension. We extract the
fileToken parameter and, by using the
GetSharedFileName() method of the
SharedStorageAccessManager class (which belongs to the
Windows.Phone.Storage namespace), we retrieve the original file name.
By reading the name, we’re able to identify the extension and perform the appropriate redirection. In the previous sample, if the extension is
.log, we redirect the user to a specific page of the application called
LogPage.xaml. It’s important to add to the
Uri the
fileToken parameter as a query string; we’re going to use it in the page to effectively retrieve the file. Remember to register the
UriMapper in the
App.xaml.cs file, as explained earlier in this series.
Tip: The previous
UriMapper.
Now it’s time to interact with the file we received from the other application. We’ll do this in the page that we’ve created for this purpose (in the previous sample code, it’s the one called
LogPage.xaml).
We’ve seen that when another application tries to open a
.log file, the user is redirected to the
LogPage.xaml page with the
fileToken parameter added to the query string. We’re going to use the
OnNavigatedTo event to manage this scenario:
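A sketch of that handler; displaying the file content with FileIO and MessageBox is an illustrative choice:

```csharp
protected override async void OnNavigatedTo(NavigationEventArgs e)
{
    if (NavigationContext.QueryString.ContainsKey("fileToken"))
    {
        string fileToken = NavigationContext.QueryString["fileToken"];

        // Copy the shared file into the root of the local storage.
        StorageFile file = await SharedStorageAccessManager.CopySharedFileAsync(
            ApplicationData.Current.LocalFolder,
            "file.log",
            NameCollisionOption.ReplaceExisting,
            fileToken);

        string content = await FileIO.ReadTextAsync(file);
        MessageBox.Show(content);
    }
}
```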
Again we use the
SharedStorageAccessManager class, this time by invoking the
CopySharedFileAsync() method. Its purpose is to copy the file we received to the local storage so that we can work with it.
The required parameters are:
- A
StorageFolder object, which represents the local storage folder in which to save the file (in the previous sample, we save it in the root).
- The name of the file.
- The behavior to apply in case a file with the same name already exists (by using one of the values of the
NameCollisionOption enumerator).
- The GUID that identifies the file, which we get from the
fileToken query string parameter.
Once the operation is completed, a new file called
file.log will be available in the local storage of the application, and we can start playing with it. For example, we can display its content in the current page.
How to Open a File
So far we’ve seen how to manage an opened file in our application, but we have yet to discuss how to effectively open a file.
The task is easily accomplished by using the
LaunchFileAsync() method offered by the
Launcher class (which belongs to the
Windows.System namespace). It requires a
StorageFile object as a parameter, which represents the file you would like to open.
In the following sample, you can see how to open a log file that is included in the Visual Studio project:
private async void OnOpenFileClicked(object sender, RoutedEventArgs e) { StorageFile storageFile = await Windows.ApplicationModel.Package.Current.InstalledLocation.GetFileAsync("file.log"); await Windows.System.Launcher.LaunchFileAsync(storageFile); }
Protocol Sharing
Protocol sharing works similarly to file sharing. We’re going to register a new extension in the manifest file, and we’ll deal with the special URI that is used to launch the application.
Let’s start with the manifest. In this case as well, we’ll have to add a new element in the
Extensions section that can be accessed by manually editing the file through the
View code option.
<Extensions> <Protocol Name=“log” NavUriFragment=“encodedLaunchUri=%s” TaskID=“_default” /> </Extensions>
The most important attribute is
Name, which identifies the protocol we’re going to support. The other two attributes are fixed.
An application that supports protocol sharing is opened with the following URI:
/Protocol?encodedLaunchUri=log:ShowLog?LogId=1
The best way to manage it is to use a
UriMapper class, as we did for file sharing. The difference is that this time, we’ll look for the
encodedLaunchUri parameter. However, the result will be the same: we will redirect the user to the page that is able to manage the incoming information.; } }
In this scenario, the operation is simpler. We extract the value of the parameter
LogId and pass it to the
LogPage.xaml page. Also, we have less work to do in the landing page; we just need to retrieve the parameter’s value using the
OnNavigatedTo event, and use it to load the required data, as shown in the following sample:
protected override void OnNavigatedTo(NavigationEventArgs e) { if (NavigationContext.QueryString.ContainsKey(“LogId”)) { string logId = NavigationContext.QueryString[“LogId”]; MessageBox.Show(logId); } }
How to Open a URI
Similar to file sharing, other applications can interact with ours by using the protocol sharing feature and the
Launcher class that belongs to the
Windows.System namespace.
The difference is that we need to use the
LaunchUriAsync() method, as shown in the following sample:
private async void OnOpenUriClicked(object sender, RoutedEventArgs e) { Uri uri = new Uri(“log:ShowLog?LogId=1”); await Windows.System.Launcher.LaunchUriAsync(uri); }
Conclusion
In this tutorial, we’ve examined various ways to integrate our application with the features offered by the Windows Phone platform:
- We started with the simplest integration available: launchers and choosers, which are used to demand an operation from the operating system and eventually get some data in return.
-.
- We briefly talked about how to take advantage of Kid’s Corner, an innovative feature introduced to allow kids to safely use the phone without accessing applications that are not suitable for them.
- We learned how to use one of the most powerful new APIs added in Windows Phone 8: Speech APIs, to interact with our application using voice commands.
- We introduced data sharing, which is another new feature used to share data between different applications, and we can manage file extensions and protocols.
This tutorial represents a chapter from Windows Phone 8 Succinctly, a free eBook from the team at Syncfusion.
Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
| https://code.tutsplus.com/tutorials/windows-phone-8-succinctly-integrating-with-the-operating-system--cms-23298 | CC-MAIN-2018-39 | refinedweb | 5,962 | 52.19 |
I was wondering what would be the recommended approach to deal with PackedSequence objects in seq2seq models when inferring future values over a variable amount of step. Without, packed sequence and fixed amount of step I’d usually write the following in the forward function of my model:
def forward(self, sequence: torch.Tensor, steps: int, hx: torch.Tensor = None): hidden_states, hx = self.encoder(sequence, hx) #... h_t = hidden_states[:, -1, :].unsqueeze(1) h_t_inf = [] for _ in range(steps): h_t, hx = self.decoder(h_t, hx) h_t_inf.append(self.activation(h_t)) h_t_inf = torch.cat(h_t_inf, axis=1) forecast = self.output(h_t_inf) return forecast
However, how would that work if the
sequence object would be a PackedSequence and steps would be a variable number of steps like in a tensor? I’ve been able to deal with PackedSequence using the solution provided with this explanation however, I am not sure how to deal with variable length steps efficiently.
def forward(self, sequence: PackedSequence steps: torch.Tensor, hx: torch.Tensor = None): #how would you deal with for _ in range(steps) where steps is variable | https://discuss.pytorch.org/t/using-packedsequence-efficiently-in-forecasting-seq2seq-model/59799 | CC-MAIN-2022-33 | refinedweb | 180 | 60.21 |
The STL and Boost are just awesome; even when not using an OO programming style, a C++ programmer has some very nice libraries available. Being able to use vectors is nice. Having many good libraries is nice.
But otherwise, I mostly agree with the author. C has shortcomings, but they are well documented and tools are available to deal with them.
What are you talking about?! The STL and Boost are bloated and ugly, and worst of all they produce the most useless, verbose, garbage compile-time errors. (Admittedly the ugliness of those errors is partly due to C++'s template implementation, and STL and Boost are far and away the best set of libraries for C++)
The problem there is the lack of adequate tooling for C++. C++ templating should have been largely fixed by Concepts. Too bad they didn't make it into the standard.
Vectors are nice (as are Lists) but those alone aren't good enough reasons to use C++.
One thing that's always confused me is the odd separation between the STL and language features. For example, why are Iterators an STL class rather than being a language construct, along with some syntactic sugar like a foreach statement?
One of the biggest reasons C++ is as complicated as it is comes from the design principle that most of the heavy lifting should be done in libraries.
The STL and TR/Boost libraries serve as a demonstration of C++ language features.
There's also the other principle of "you don't pay for what you don't use". Having library features as part of the language features could result in unwanted features being pulled into low level code. Having a separate library is a very strong signal that the inclusion of heavy weight stuff is intentional when it's in code.
That explains my question regarding Iterators, but I still find the split rather odd and slightly arbitrary.
Of course my complaint is technically invalid since C++11 added syntactic sugar for foreach loops, so at least the syntax isn't as ugly now.
I really liked Smalltalk, but it was just too darn slow and unusable in the real world. Then, 10 years ago, I tried coding in Objective-C on Linux (with GNUstep) and I haven't changed back ever again. Since it's a pure extension on top of C, I can still write all of my low-level algorithms in C and the higher level constructs in Objective-C. And GNUstep (and OpenStep on which it's based) is really a freakin' great library. Even for non-graphical work it's super simple:
1) There are two classes implementing collections (NSArray, NSDictionary), not a bajillion.
2) The garbage collector is extremely predictable (ref counts).
3) The class hierarchy is very shallow (at most 1-2 superclasses for 95% of the entire library), so it's easy to memorize.
4) I can intermingle low-level concepts (e.g. sockets) with GNUstep constructs (NSFileHandle) and it works just fine.
5) The loose typing and built-in dynamic reflection makes many tasks super simple (e.g. -[NSArray makeObjectsPerformSelector:]).
All in all, the idea mix of high-level abstraction and low-level grunt for my taste.
And are you sure you don't need other data structures? Like, you know, trees & such? Bitmaps?
You can write C++ code without ever using delete, using things like smart pointers (ref counted).
The point of a framework is to offer you features, not to be small. You don't need to memorize it. You should always be able to access the documentation. Qt is this sort of framework that is elegant and can help you write clean code.
When did lack of features become such a great thing?
bram,
That's an interesting link. From a technical standpoint I don't think anything is inherently bad with C++ features. However there's no doubt that it encourages rather different approaches to software design.
C programs often use no abstractions whatsoever and will directly call the external libraries and kernel.
C++ facilitates rich levels of abstraction, which is generally a selling point for developers to choose it over C, and yet these very abstractions can be responsible for adding many more inefficient layers than we typically find in C programs.
OOP interfaces significantly help with contract-oriented programming in teams and help make problems much more manageable. I think a good OOP programmer will know where to draw the line without going crazy about everything needing to be a proper object.
There's no reason a high performance game should not be written in C++, just be mindful of too much indirection in critical loops.
I think that the author's enthusiasm in the productivity arena is somewhat misguided. I love C, but I don't find it as productive as C++ (especially when compared with C++11). It may be a personal feeling, but the OOP support that C++ offers makes you more productive and makes it easier to build larger-scale applications.
And when you go into the C# development, comparing C with C# in the area of productivity is just mean towards C. I'm glad the author didn't insist much on the productivity part, because that would've been wrong. C is less productive, but it has all the other things that the author highlights.
C++ is too big for low level stuff and too complicated for higher level stuff. It's almost impossible to master the language... really really hard (at least for stupid people like me... and We're the 99% xD).
I think the smart way to go is plain old C for system level stuff and Java or C# for user level stuff.
Android is the perfect example of this kind of mixture. Profit.
Why are you using all the bells and whistles of C++ for the low level stuff?
Qt seems to be doing all right.
Mastering the entire language is only really necessary if you're creating libraries for wide consumption. The difficult C++ features are there to achieve generalization while still retaining static pre-compile type checking.
C++ is so big that you are always learning the language, wasting precious time that can be used solving the actual problem.
I don't think that's true. I've dived through the code of C++ libraries like Qt, Boost, STL, scene graphs and game engines, and they haven't really used language features that aren't in common use.
Even the stupidest people, like myself, can read Java code and get a general idea of what it does. It's a very simple and beautiful language to read.
Over-engineering is not a language problem. You can over-engineer in assembler if you want.
The problem with C++ is the language complexity itself, It's difficult with or without over engineering.
Let's blame the "German language designers" for creating a language that not all people can understand.
It's easy to mock, but his point was not unreasonable - Java is a much simpler language than C++, without templates, pointers, etc. A non-Java programmer looking to read Java code is going to have a much easier job understanding what he sees, than a non-C++ programmer looking to read C++ code.
Java has generics. It even looks like C++ template syntax. Except with Java generics, you lose type information so you can't write:
if( obj instanceof HashMap<String, String> )
You have to write:
if( obj instanceof HashMap<?, ?> )
In my experience, Java's over-reliance on inheritance makes the behaviour of Java code very hard to figure out just by looking at it. In general, I find dependence on a debugger to contribute more to unreadable design than anything else.
If someone values designing code using the object oriented paradigm (OOP) then it would be expected this same person would use a language that has explicit/stronger support for OOP features (e.g. C++) rather than spending time with a language having weaker support for these features (e.g. ad-hoc simulation of OOP features in C that are rigidly implemented in C++). The issues are if OOP is applicable to a project and if the coder(s) have decent design/knowledge skills to realise any relevance of OOP for the respective coding project.
I find OOP-based design, at the application level, to be a natural stance for dealing with software-coding challenges. I suppose kernel/embedded-level code is another scenario for which my stances may not necessarily be applicable.
C++ does not force a specific form of OOP.
It provides a platform for the production of OOP code according to a style/complexity paradigm maintained/enforced by the coder.
The existence of any deleterious level of "type complexity" and "interface interdependency" would more reflect a poor code design and issues with the associated human software designers/implementers. Sure, the C++ language contains the facilities to produce "deleterious" code but the language itself does not force a person to code in a deleterious fashion. Inadequate design/knowledge skills, in the context of the project, are to blame for this. Sometimes coders get out of their depth (e.g. impatient, temporal-based deadlines, etc.) and create badly designed/implemented code.
The author imparts a "scary" scenario for the existence of "complex types". We should not forget that these "complex" types (i.e. C++ classes, which should not be designed as being difficult to use) manage the complexity of the code base through OOP-related mechanisms such as encapsulation, polymorphism, interface visibility, etc. Instances of C++ classes (i.e. C++ objects) are "fun" to use.
It's about the design.
If "bloated" frameworks/libraries exist and are deemed a "bad" idea, it is not the fault of the language but the fault of the software designer(s).
Sure C++ is not perfect, but I never think about going back to C for my application-level libraries/executables. Minimally, C++ can be used as a "better" C in those procedural-type non-OOP programs.
Anyway, if someone chooses to stick with C ... that's fine.
If someone chooses to stick with C++ ... that's also fine.
At the end of the day we all have differently wired minds that handle complexity in their own special way and it is up to the individual to select a software language they are comfortable with in order to solve the respective software coding challenges.
That's all folks.
The author is no doubt enthusiastic about C, but there are quite a few things he gets wrong.
At the time C was developed, and even during the 80's, there were languages with better compilation speeds, like Turbo Pascal and Modula-2.
The UNIX guys also decided to ignore the languages of their time, which already provided better type checking than C does.
C's main weaknesses:
- no modules
- no way to namespace identifiers besides 70's like hacks
- null-terminated strings are an open door to security exploits
- the way arrays decay into pointers ditto
- weak type checking
- pointer aliasing forbids certain types of optimizations
C developers used to complain about Pascal type safety in the 80s, but if you are security conscious:
- Read MISRA C
- Enable all warnings as errors
- Use static analyzers as part of the build process
Funnily enough, with those steps C gets to be almost as safe as the Pascal family of languages.
C needs to be replaced, along with its Objective-C and C++ descendants, for us to get better security.
The latter do offer more secure language constructs, but they are undermined by the C constructs they also support.
Even Ritchie recognized that C has a few issues.
If UNIX had not been copied in universities all over the world in the early 80's, C would just be another language in history books.
Oh cry me a river.
And there is no way you could possibly be wrong about that!
Sure it has them, though presumably, if you didn't know about them, you have no idea what "static" on globals and functions means.
the way arrays decay into pointers ditto
I think you're more complaining about a lack of automatic range checking. Sometimes it's useful, sometimes it isn't (mainly by introducing invisible glue code that can mess up some assumptions, e.g. atomicity).
Compared to what? For my tastes the type checking in C is pretty strong.
"restrict" has been in C since 1999. Your complaint is 13 years out of date.
They ask for money, no thanks. (NASA coding guidelines are an interesting read though.)
Not all warnings are errors, and while it helps in development, it's very bad for e.g. libraries to ship code with -Werror enabled, since a change in compiler versions can introduce new/different checks or obsolete build options and thus break your build.
lint has been in Unix since V7 (1979).
Something stinks with the notable stench of "smug".
It is true that C gives you greater freedom in certain things and that includes shooting yourself in the foot. No question about it. On the other hand, certain things are much easier and better done in C (low-level data crunching routines).
And there he claims, among other things, that C type specifications are richer than Pascal's, directly contradicting your earlier statement. Be careful who you cite to support your case.
Looks like somebody's got an axe to grind.
"And there is no way you could possibly be wrong about that!"
Then why did they not use Algol 60 or PL/I, which were the system programming languages of the time?
"Sure it has them, though presuming you didn't know about them, you have no idea what 'static' on globals and functions means."
It has not.
Separate compilation is a primitive form of modules, but it is not the same.
Sure we could also keep on using the original UNIX, why care about progress?
the way arrays decay into pointers ditto
"I think you're more complaining about a lack of automatic range checking. Sometimes it's useful, sometimes it isn't (mainly by introducing invisible glue code that can mess up some assumptions, e.g. atomicity)."
Forget about the NULL character, boom!
Arrays should not be manipulated as pointers.
As for the usual C argument about array bounds checking, all modern languages that compile to native code allow for selective disabling of bounds checking if required.
"Compared to what? For my tastes the type checking in C is pretty strong."
Almost every other statically typed language out there?
"restrict" has been in C since 1999. Your complaint is 13 years out of date. "
Except, like register, the compiler is free to ignore it and besides gcc and clang many C vendors still don't fully implement C99.
"They ask for money, no thanks. (NASA coding guidelines are an interesting read though.)"
In our society people tend to get money for their work.
"Not all warnings are errors and while it helps in development, it's very bad for e.g. libraries to ship code with -Werror enabled, since a change in compiler versions can introduce new/different checks or obsolete build options and thus break your build."
You can always turn off false positives.
"lint has been in Unix since V7 (1979)."
Sure, but:
- UNIX is just one OS among many
- Being available does not mean developers use it
"It is true that C gives you greater freedom in certain things and that includes shooting yourself in the foot. No question about it. On the other hand, certain things are much easier and better done in C (low-level data crunching routines)."
Like in many other languages.
"And there he claims, among other things, that C type specifications are richer than Pascal's, directly contradicting your earlier statement. Be careful who you cite to support your case."
Sure, when compared with the original ISO Pascal. The ISO Extended Pascal as any other Pascal dialects are more expressive than C.
"Looks like somebody's got an axe to grind."
UNIX is a good operating system, but it is not the god of operating systems.
Wow, FreePascal on Windows user, I presume. The aura of your smugness makes my screen damp. Simply because something is old doesn't mean it's bad.
Why do you think that "more features" = "progress"? Natural languages, for instance, often times simplify their grammar through more use (e.g. the regularization of the past-tense form of verbs in English).
NULL is actually a macro typically defined to ((void*)0), but I understand if your case-insensitive eyes don't see the difference.
Offhand remarks get offhand responses.
Yes, your personal opinion on the matter is really insightful.
This would be easy enough to add to C as well, say, by introducing an "array" keyword in variable definitions (which would turn on dynamic range checks). No need to redesign the language from the ground up. Might be nice to have, no dispute there.
Oh my, is support really that bad?! Oh wait, you just pulled that one straight out of your behind:
Mind you, GCC still isn't fully C99 compatible: so even your "GCC and clang" statement is false. Do you even google before you assert something?
That's not what I meant. I mean is that some random dude on the Internet isn't going to persuade me to spend money on a product that he insinuates will "fix" my coding practices.
Obviously you've never built any bigger product somebody else coded from source, have you? I don't have the time to comb through somebody else's build tools to find which options fail and how to disable them.
Other platforms have other static analyzers. But UNIX has been the platform of birth for C, so I just wanted to show you that what you say here is not news to native C coders.
So you think forcing people is the proper approach? Have you ever considered that some people might not like that? Of course not, you are beyond error (see "arrays shouldn't be treated as pointers" comment above).
What I was talking about is the contrast between highly abstract languages (Objective-C's OO part, Java, etc.) and lower abstraction languages (C). Of course you can write data crunching in other low-level languages (even Assembly for that matter).
"I never said so, but its history is intertwined with C, so knowing about UNIX gives you a good idea how C developed."
Well yes.
I never used FreePascal though.
I am old enough to have used C and Pascal when the first compilers were being made available in ZX Spectrums.
I didn't say they were top quality, only better than C.
According to epoch documentation, many companies did license languages in those days.
I only care about the Computer Science definition and C does not offer that.
"Wow, FreePascal on Windows user, I presume."
Never used Free Pascal.
Windows is just another operating system among many.
Why do you think that "more features" = "progress"?
Usually more features tend to improve programmer's productivity.
Well, actually English grammar was originally simplified thanks to the Norman occupation.
"NULL is actually a macro typically defined to ((void*)0), but I understand if your case-insensitive eyes don't see the difference.
Offhand remarks get offhand responses."
I have seen enough code like str[index] = NULL to make that statement.
Not everyone writes str[index] = '\0', specially the ones that like to turn off warnings.
Too many scars of off-shoring projects.
"Yes, your personal opinion on the matter is really insightful."
I imagine any security expert would agree, but I might be wrong.
Should I now give a CS lecture about classes of static typing?
"Oh my, is support really that bad?! Oh wait, you just pulled that one straight out of your behind:"
I am well aware of that Wikipedia page.
There are many more C vendors than Wikipedia lists, and many customers don't let you choose the compiler to use.
That assertion was because many think that gcc and clang are the only compilers that matter.
"Obviously you've never built any bigger product somebody else coded from source, have you?"
Well, have you ever done a 300+ developers multi-site project?
"Other platforms have other static analyzers. But UNIX has been the platform of birth for C, so I just wanted to show you that what you say here is not news to native C coders."
To expert C coders, you mean.
"So you think forcing people is the proper approach? Have you ever considered that some people might not like that? Of course not, you are beyond error (see 'arrays shouldn't be treated as pointers' comment above)."
No, I am just a meaningless person in this world.
Just stating my opinion.
Point taken.
"I never said so, but its history is intertwined with C, so knowing about UNIX gives you a good idea how C developed."
Agreed.
And here we go with the blanket statements again, asserted without proof.
Yep, they did, but Bell Labs obviously didn't feel the need to.
Oh really? Then I quote from the Wikipedia page you linked:
Since 1989 C has had rigorous support for this kind of compartmentalization (.h interface files, .c implementation files) and virtually every single project I've ever laid eyes on has used it. Of course it's rudimentary, but there are higher constructs on top which provide most, if not all of the features you would expect.
By that logic, more controls on a car means better/safer/more productive driving. In actuality, Einstein's famous (possibly apocryphal) quote captures reality much better: "Everything should be made as simple as possible, but not simpler."
I said 'often times', not 'always'. Obviously at times languages expand in complexity to incorporate new necessary concepts and there's nothing wrong with that.
These programmers deserve a kick in the nuts for the above construct. This will produce warnings (implicit cast of void * to char) and is indicative of the fact that the author doesn't actually understand how computers work. I personally prefer str[index] = 0; Same meaning, less text, clearer.
Ah well, if you buy a cheap product, don't be surprised when it turns out to be shoddy. I have the same experience with offshore. Having a "safer" language means they will just provide you with dumber code monkeys.
Security is not a simple yes/no game - the most secure computer is one that is off. It's all about finding middle ground. Some code warrants your approach, some doesn't. Making blanket statements, however, will guarantee that at times you will throw the baby out with the bathwater.
If you can't support your claims, don't make them. But before you dig into it, take a look at:...
So any statements you present will most likely just express your personal opinions on the matter. Oh and I have an MS in CS, so I've heard them before (including "C sux", "C rocks" and "Let's code everything in Prolog").
There many more C vendors than what Wikipedia lists and many customers don't let you choose the compiler to use.
If your hands are bound by your customer, then I suspect you have other problems in your project, not just with the language.
I was talking about things like X.org, KDE, the Linux kernel, Illumos, etc. These are humongous code bases with tons of external dependencies and when doing a project that uses them, I don't have the time to go through each and every piece and fix a maintainer's bad assumptions about build environments, often times just to test a solution. That's why I said -Werror is good for development, bad for distribution.
Agree, it's probably news to you. GCC, for instance, supports -W and -Wall, both of which together activate lots of helpful static code analysis in GCC (unused variables, weird casts, automatic sign extensions, etc.).
No problem there - when you clearly state something as personal opinion, I have no problem. It's only the assertions and blanket statements that make my blood boil. We could probably understand each other over a beer much better than over the Intertubes.
"And here we go with the blanket statements again, asserted without proof."
Sure asserted without proof.
It is based on my understanding of what a better language is, from looking at the epoch documentation that I had access to during my degree.
For me, a languages that allow the developer to focus on the task at hand are always better than those that require them to manipulate every little detail.
Usually that detail manipulation is only required in a very small percentage of the code base.
In the end I guess it is a matter of discussing ice cream brands.
So any statements you present will most likely just express your personal opinions on the matter. Oh and I have an MS in CS, so I've heard them before (including "C sux", "C rocks" and "Let's code everything in Prolog").
And I have one with a focus in compiler design and distributed computing.
Usually static languages with strong typing don't allow for implicit conversion among types, forcing the developers to cast in such cases.
Overflow and underflow are also considered errors, instead of being undefined like in C.
Pointers and arrays are also not compatible, unless you take the base address of the array.
Enumerations are their own type and do not convert implicitly to numeric values, like in C.
Well let me quote another section of the article.
Somehow I don't see C listed as a language that supports modules.
Sure you can do modular programming by separate compilation, but that is not the same as having language support for it. I have done it for years.
For example, in some languages that support modules, the compiler has a built-in linker and is also able to check dependencies automatically and only compile the required modules.
The types are also cross-checked across modules. In C,
some linkers don't complain if the extern definition and the real one don't match.
In the Fortune 500 corporate world, usually there isn't too much developer freedom.
"Agree, it's probably news to you. GCC, for instance, supports -W and -Wall, both of which together activate lots of helpful static code analysis in GCC (unused variables, weird casts, automatic sign extensions, etc.)."
I am aware of it.
As I answered in another thread, I have done C programming between 1992 and 2001, with multiple compilers and operating systems.
Since then, most of the projects I work on rely on other languages.
"No problem there - when you clearly state something as personal opinion, I have no problem. It's only the assertions and blanket statements that make my blood boil. We could probably understand each other over a beer much better than over the Intertubes."
Yeah, it would be surely a better way to discuss this.
Usually that detail manipulation is only required in a very small percentage of the code base.
Essentially what you're describing is limiting freedom in what you can do in a certain language, so that a programmer has no chance of running into trouble. Sometimes it's a good thing, sometimes it's slowing you down. The lack of pointers in Java, for instance, has frequently tied my hands down, e.g. I can't pass a subarray as a zero-cost pointer to a subroutine to work on. Instead, I either have to change the interface to include an offset index, or create a copy (potentially a huge performance penalty if the operation is trivial). If I want to make sure that the callee doesn't modify it, I must copy it. In C, I'd simply make it a pointer to const.
Sure is.
Overflow and underflow are also considered errors, instead of being undefined like in C.
Pointers and arrays are also not compatible, unless you take the base address of the array.
Enumerations are their own type and do not convert implicitly to numeric values, like in C.
All of what you describe are limits on what a developer can do. At times it's sensible to limit them, sometimes it's simply throwing hurdles his or her way. For a CRM system, or a web app, fine, it's sensible not to manipulate pointers - that isn't performance critical code. But not everywhere. I have no problem with you saying 'sometimes/most often these are not necessary'. The problem is when you assert that C is essentially a stupid, pointless language and that everything else that you like is better. I know it's tough to swallow, but there are painfully few OS kernels written in Pascal, and probably for good reason.
That's merely because the author didn't follow his/her own definition. The logical structure of such an argument is:
1) If language X has feature Y, then has support for modular programming
2) Here are the languages I know of that meet criterion 1
All the author did was miss another language that clearly meets their own criteria for modular programming.
This is a compiler/build infrastructure feature, not a language feature. For instance, Sun's javac doesn't do that (just tested it), yet Java clearly fits the definition. So for all matters, this is a pointless criterion.
some linkers don't complain if the extern definition and the real one don't match.
While it is possible to have poorly written code where interface declarations are completely torn away from their own implementations, the correct practice is to #include your own interface files when making the implementation, exactly to provide this check. Again, your complaint is at least 20 years out of date (this was true in K&R C and other pre-C89 dialects which lacked proper interface declarations).
.... "
Interestingly enough, Multics, whose developers included Ritchie and Thompson, was written in PL/1 and used one of the first non-IBM compilers for the language. PL/1 was by all accounts a product of 'design by committee' and compilers for it which supported the entire language were difficult to implement and required high end computers at the time.
"Simple and Expressive"
In what way is C simple? Take someone who's got a good background and mind for programming and show them C. See how long it takes them to actually understand how to define a function to pass to qsort() and bsearch() and then get the syntax right for function pointers. Versus .sort() and .find() in any object language, or the array-style dictionaries in Smalltalk or Lua. No way is C more expressive, much less 'simple.'
"Simpler Code, Simpler Types"
Okay, that I sort of agree with. I was bouncing through the JavaFX API docs yesterday, and there sure were a lot of bizarro return types that don't make sense off hand. However, there are entire books written about how to figure out what a damn variable definition means in C. const * char is a pointer to const... I think. Right? Very simple.
"Speed King"
Okay.
"Faster Build-Run-Debug Cycles"
In what universe? I grant you, C++ takes longer to compile than straight C, normally because it has to compile anything having to do with templates all the way back up the line. But my Java compiles are every bit as fast as C compiles. Maybe faster, since Java often does a better job of figuring out what needs to recompile, vs. 'hmm, maybe I should 'make clean' just to be sure.' And the Build-Run-Debug cycle of ANY scripting language is about a hundred times faster than C. gcc --help takes longer than no compile step at all.
"Ubiquitous Debuggers and Useful Crash Dumps"
Again, I can't argue with that one. Well, C crash dumps aren't always useful, but they're there when you really need them.
"Callable from Anywhere"
Okay. Although I'm not sure there's any real advantage over C++ in this regard.
"Yes. It has Flaws"
Indeed. Of course, every language does. But some of the crap you put up with in C is really just awful. Strings may be a simple, dumb example, but damn it just about every program has to deal with strings constantly. It may be a small grain of sand, but it chafes after a while. And sure, I can get an internationalization library for C. (Or ten of them, and who knows which one is actually the good one.) But in Java it's just there.
A final thought. Would Couchbase have been written at all if it hadn't started in Erlang? Sure, when it got big and successful and they needed speed improvements they re-coded in C. But could they have gotten that far without a higher level language to start with? The whole idea of "rapid prototyping" isn't just good because it makes programmers' lives easier, but also because get up and running with something functional quickly. You have to prove a business case or convince people to join your team, and that point is a lot harder to get to in C..
This is a flawed premise. You are positing a hypothetical question and drawing conclusions without any data to show for it. In other words, what you're doing here is blind speculation.
"const *char" isn't a valid C construct. I think you meant 'const char *' which is the traditional form to write the more proper 'char const *'. That is a "character constant"-pointer, i.e. a pointer to a character-constant. Contrast with 'char * const' which is a "character"-pointer constant. This is one of the more luxurious features in C, having the ability to keep a close eye on object mutability. How do I pass a constant array in, say, Java? I can't, there's no way to do it.
The rest of your comment, I have no problem with. C is not a one-size-fits-all solution.
Fair enough. I stand by the basic premise, however. Using function pointers to pass a compare function to a sort function is more complicated than blah.sort(). Particularly if your collection has a natural sort order, like strings.
Ha! Okay, it was 4 in the morning when I wrote that, but the fact that I screwed up my example kinda proves the point.
The point of passing a compare function to a sort function is for when you want to sort things into a different order than the natural one.
Except in C, there is no way to sort anything without passing a compare function to qsort. Even an array of ints needs a hand-coded, if trivial, compare function passed to it..
Hmm... I don't see your point. Prototyping in a higher-level language doesn't take away the usefulness of C. Judging by this thread I'm hardly the only one combining C and Python in my daily work, and yes I pretty much always write new code in Python first and then translate performance hotspots to C.
That said I'm certain Couchbase could have been written directly in C without prototyping in a higher level language, I think the initial thought was to have it running in Erlang but the performance was found lacking which prompted a rewrite in C.
The Linux kernel devs wrote git straight off in C during a very fast development phase and it's been a huge success in no small part due to it's much-lauded performance, but also it's stability.
Unlike the other common programming languages of our days, C is the only one expressive enough to do everything you want to do with the machine.
For some it is a strength, for some it is a weakness, but to use C well you need to know how CPUs execute code.
If you know that, aside from syntactical quirks, using C is easy and results in small and fast programs, even in userspace.
Nowadays it is fashionable to criticize C and to say it is not good for userspace applications, but a fact remains and it is that most of the applications which make the computing world run, they are in C. Pure C.
Despite some people thinking otherwise, OOP oriented programming is much more confusing for newbies, and newbies should start learning to programming in assembly. Yes, in assembly. The only way they will understand what the language is doing when they go on to higher level languages. Theory means nothing without practice and understanding. Once you know how assembly languages work and the memory model of machines pointers are just natural.
Substandard programmers can flock to languages like C# or Java. Not that I find Java, the language, wrong in itself, it's just that it is often used by programmers who do not understand what's happening and that want to be cool by saying stuff like "it's secure, it's run by a VM, it's sandbox, it doesn't have pointers!!", well it is does have references. Which, internally, are just like pointers! It's hidden to the eye, but they are still there. Proof is the argument passing to methods in Java, if I pass a reference to an object in an argument, and if I change the value of the reference inside the function at a later moment, the value of the reference I passed to the function externally doesn't change. Just like C. It's still passing argument by copying. Most other things are just things the Java Compiler enforces for you and the JVM sets up automatically for you (like garbage collection, interfaces, etc.).
You can compile C for a virtual machine, and it will be as "safe" as Java run in a virtual machine, and also the MMU already tries to "isolate" user code from system code in order to make things safer. Most times it is sufficient, at times not.
Not type checking adds to the expressiveness of C, it doesn't make it "unsafe" if you know what you are doing. That you can copy things elsewhere without being stopped is a very useful side feature, especially in system code.
Also, C strings are not necessarily too tied to the language. You could modify a C compiler in order to output Pascal-style strings for string constants, and you could make custom string functions that handled Pascal strings if you really wanted to..
Edited 2013-01-11 09:06 UTC
Really?!
And I though Ada, Modula-2, Modula-3 and Turbo/Free Pascal, Oberon(-2) already provided everything C does and more, just to name a few comparable languages.
Security exploits everywhere, since not everyone is a top developer.
Then you throw C performance out of the window, because all C libraries assume null terminated strings and you end up converting between string types all the time.
Edited 2013-01-11 10:30 UTC
Since when is C performance depending on null-terminated string ? God, have you already programmed in C ?
Whenever i'm looking for performance, i never handle strings, but blocks of bytes.
Strings are for "higher-layers", such as sending a file name as a parameter. This kind of usage has zero impact on critical loop performance.
I am a system software developer and I have been using C for for about a decade now. The reason why I got into C was simple: Unix is developed in C and this is what you tend to use for Unix system software development.
Some people look down on C because it's old and uncool, but just because something is old does not mean it's crap. Many people avoid C because it's not an object oriented programming language, however I think you can use object oriented programming paradigm with C, as long as you are willing to fully understand how it works at the low level. With a bit of effort and discipline, I have been able to use plain C to achieve single inheritance, polymorphism and generics. It takes a bit longer to develop your code compared to Java or C++, but I would argue it makes you a better programmer when you understand the low level implementation.
So despite its age, C is a very flexible and powerful programming language, although I would prefer for Ada to be more ubiquitous.
Exactly my experience as well. But when I break it to the modern wannabe-hipster script kiddies here on OSNews, I get modded down
It's fine understanding how the OO features work on a low level. I'm not sure I'd say people *need* to know this to be decent programmers, but I'm personally fond of my background in C and assembly.
But, after learning how to do things at a low level, why would you continue writing code in that tedious and painful way, when there's tools better suited for the job?.
And yet, C is the most popular language and has been on the rise for a few years (even displaced Java as the most popular language a few years back):
So, what gives?
Demand for C has increased a few years ago due to the need for developers in the embedded market.
Otherwise I would take this link with a pinch of salt: Visual Basic 6 is almost more popular than C# and on its way to surpass it.
I think that a more interesting metric would be the languages used daily on our computers and on large applications.
Edited 2013-01-11 17:42 UTC
Otherwise I would take this link with a pinch of salt: Visual Basic 6 is almost more popular than C# and on its way to surpass it.
So what you are saying is that C isn't a good fit for *your* problem domain. No problem there, there's no silver bullet as languages go, and everybody picks their favorite language also according to personal taste.
If you are allowed to formulate arbitrary questions, you can get arbitrary answers. That doesn't mean that they apply in the real world, though.
As for me, personally, I don't care if C is the least or most used computer language in the world. I still use the right tool for the job, be it C or something else.
So, are you claiming that is other programming language where the future OSes are being written? Something that the new guys can use instead of the old-good-for-nothing C?
Interesting!..
C is nice, and has worn remarkably well, but if I were making the choice between those two for a new project, C++ would normally be the winner.
This may be sacrilege to some, but considered strictly as a technical book, I think Stroustrup's "The C++ Programming Language" is better than K&R's "The C Programming Language" as well.
Edited 2013-01-11 17:38 UTC
Let's open the free buffet :
Kochise.
I wonder if this gentlemen has tried Go? If he is a big C fan, Go might be a great choice for the stuff that doesn't need to be written in C. Great qoute: “Go is like a better C, from the guys that didn’t bring you C++” — Ikai Lan. Not only is Go a safer version of C with lots of great modern features, it integrates very well with C via cGo.
If memory usage and speed are your preeminent concerns, C is certainly a force to be reckoned with:...
Could you please elaborate on your comment?
D is a systems programming language (like C or C++) but with a very high level of abstraction.
Edited 2013-01-12 04:43 UTC
.
C might have a limited domain now, but you are forgetting when it was written is was very much a general purpose language. In the same way Go is.
Go might not have these qualities compared to C, but it does have many of these qualities compared to modern applications programming languages. Outside of writing a OS, Go's problem domain is a super-set of C as used in modern times.
I think you don't understand what makes Go fast to compile:...
Furthermore, Go tip (1.1 in dev) has many many more optimizations than Go 1.0.3 and the compiler is 30% faster yet.....
...I was responding to someone who was comparing Go and D... I don't think D is particularly relevant to most developers._9<<.
Well if you had framed it as such then I would have had no problem with your claim, although I would still find it odd to compare Go with C's much more widespread usage in the 70/80's as opposed to the areas it mainly occupies today.
Ah, my bad, sorry.
and yet C is indeed stuck in the past, with C99 being the latest version i know of.
I guess the culprit is C++, which is supposed to be, well, C plus other things. So probably a lot of people will answer just that : newest versions of C are "embedded" into C++ evolution.
I feel pity for C. That basically means there is no more any evolution to wait for C, all this due to a "name grabbing effort" by the C++ team. For people who want the predictability of C, there has to be another route than C++....
Why is a trabant better than a ferrari?
Simple!! If doesn't allow you to drive with 300km/h against a tree. With the "top speed" of a trabant being around 70-100 km/h, you still have chances to survive if you made an accdient. And it costs you a lot less to repair it.
Guys really, with c, there is a lot of stuff you simple can't do, and which would force you to write a lot more code instead of automatic code generation.
And would say about D programming language?
But after some three and half decades of programming - much of it with C and C syntax languages - C (and most every language based on it) PISSES ME OFF. Needlessly cryptic, pointlessly convoluted, and seemingly intentionally designed to make 100% certain you are going to make coding mistakes, I would rather hand assemble 8k of Z80 machine language, than deal with trying to find a bug in 100 lines of C code.
The ONLY reason I put up with it is that generally speaking it's what you are forced into using by compiler availability, support, and what's expected of you in the workplace. It's easy to blame the lemmings at the rear and front -- since most of us writing software are stuck in the middle and can't see where you're going and can't stop for fear of getting trampled.
I often think C and every language based on its syntax from C++ to Java to PHP, exists for the sole purpose of making programming hard. They are certainly a far cry from the elegance of languages like Pascal or the simplicity of assembly... To be frank, I thought there were two core reasons for higher level languages -- portability - which is a joke when you're still that close to the hardware, and being simpler than machine language - which it most certainly is NOT! It gets far worse when you look at objects in most any C derivative language since they seem to be just shoehorned in any old way!
Even sadder are all these 'newer' languages that are even more needlessly cryptic and difficult to decipher like Python, Ruby, or lord help you Rust... Rust, the language for people who think C is a bit to clean and verbose -- which is akin to saying the Puritans who went to Boston in the 17th century did so because the CoE was a little too warm and fuzzy for their tastes... or founding your own extremist terrorist group because Hezbollah was a bit too warm and fuzzy. There's this noodle-doodle idiotic concept right now that typing a bunch of symbols and abbreviations most people could never remember in a hundred years is 'simpler' than using whole words -- and the quality of code has gone down the toilet thanks to it... Such idiocy explaining why people will piss away bandwidth and code clarity on halfwit rubbish frameworks like jQuery.
It's enough to make you think the old joke... isn't a joke.
Edited 2013-01-12 23:40 UTC
I should like Python -- I really should given what a stickler I am for clear consistent formatting...
But to be brutally frank, it's more cryptic than C in it's logic structures. Take the example up above by funkyelf -- both the C and the Python versions make me want to punch someone in the face due to their lack of clarity.
But again, I worship at the throne of all things Wirth so...
I LIKE the forced formatting of Python -- I DISLIKE the unclear control structures and lack of verbose ending elements... and the needlessly short/cryptic methodology and naming conventions. By the time you get into iterators and generators, it's a needlessly convoluted mess that honestly, I have a hard time making any sense out of.
I dunno, maybe this dog is getting too old for new tricks -- but I cry for anyone trying to use python to learn with -- which is part of why I don't get why the Pi folks and many educators have such a raging chodo for it. It's the LAST thing I'd consider using to teach people to program... It's another of those languages so complex IMHO you'd be better off just sucking it up and coding machine language directly. I really don't get these high level languages that make assembly look simple.
Edited 2013-01-13 09:37 UTC
With iterators and generators, you have to understand that it's almost a different "paradigm". I think the root of your problem with Python may more be the fact that it's not a purely imperative language? I'm currently biting the Common Lisp bullet, and the Lisps have the same kind of ability.
Like C++, most programs don't need advanced Python features like iterators or generators anyway, but once you get used to a more declarative style of programming, it becomes a lot easier. That usually involves writing a few Python list comprehensions.
The thing that makes Python a good teaching language is that the basics of programming in Python is a lot easier to understand than C and its descendants. Yes, there are complicated advanced things, but in terms of the basic stuff, Python is easier to teach.
I too think the productivity aspect of C is overrated by the author.
I think what the author fails to consider is when your programming language of choice is "fast enough". Also, too many times, I have seen algorithms to solve problems using brute force approaches isntead of choosing smarter algorithms. I don't care if you write machine code, if you're using a dumb algorithm, your program's runtime efficiency will pay for it. I think that C's reputation as a "high-level assembly" actually makes people less likely to truly think about algorithm design, and instead rely on tricks of the compiler. In other words, C's reputation as a fast language is actually detrimental to good algorithm design.
Many people have been addressing C's lack of OO as a good thing, but I will say that C's lack of functional style programming is a bad thing. Having had to torture my brain to grok functional programming languages, I think that people just don't think easily in it. Recursion tends to hurt people's brains, and generally, people come up with iterative solutions instead. Also, the immutability of variables throws people for a loop (no pun intended from above). Unfortunately, C is horrible from a functional perspective. Start writing functions in a map-reduce style, using first class functions for filtering, having the ability to generate functions dynamically, or using to closures to encapsulate data....and you will truly miss it when you have to write things in an imperative style. | http://www.osnews.com/comments/26689 | CC-MAIN-2016-22 | refinedweb | 8,836 | 63.39 |
This post is a follow-up intended to initiate a discussion on whether whitebox def macros should be included in an upcoming SIP proposal on macros. Please read the blog post for context.
Whitebox macros are similar to blackbox def macros with the distinction that the result type of whitebox def macros can be refined at each call-site. The ability to refine the result types opens up many applications including
To give an example of how blackbox and whitebox macros differ, imagine that we wish to implement a macro to convert case classes into tuples.
```scala
import scala.macros._

object CaseClass {
  def toTuple[T](e: T): Product = macro { ??? }

  case class User(name: String, age: Int)

  // if blackbox: expected (String, Int), got Product
  // if whitebox: OK
  val user: (String, Int) = CaseClass.toTuple(User("Jane", 30))
}
```
As you can see from this example, whitebox macros are more powerful than blackbox def macros.
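To make the difference concrete, here is a hand-written version of what a whitebox expansion could produce at this particular call site (`toTupleExpanded` is an illustrative name, not part of any proposed API):

```scala
// Hand-written sketch of what a whitebox expansion of `toTuple` could generate
// at this call site; the method name is illustrative only.
case class User(name: String, age: Int)

// The expansion's result type is refined from Product to (String, Int).
def toTupleExpanded(u: User): (String, Int) = (u.name, u.age)

val user: (String, Int) = toTupleExpanded(User("Jane", 30))
// A blackbox macro would be stuck at the declared Product; the refinement
// to (String, Int) is exactly what whiteboxity buys.
```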
A whitebox macro that declares its result type as Any can have its result type refined to any precise type in the Scala typing lattice. This powerful capability opens up questions. For example, do implicit whitebox def macros always need to be expanded in order to be disqualified as a candidate during implicit search?
Quoting Eugene Burmako from SIP-29 on inline/meta, which contains a detailed analysis on "Losing whiteboxity":
The main motivation for getting rid of whitebox expansion is simplification -
both of the macro expansion pipeline and the typechecker. Currently, they
are inseparably intertwined, complicating both compiler evolution and tool
support.
Note, however, that the portable design of macros v3 (presented in) should in theory make it possible to infer the correct result types for whitebox macros in IDEs such as IntelliJ.
Quoting the minutes from the Scala Center Advisory Board:
Adriaan Moors, the Scala compiler team lead at Lightbend agreed with Martin, and mentioned a current collaboration with Miles Sabin to improve scalac so that Shapeless and other libraries can rely less on macros and other nonstandard techniques
What do you think, should whitebox def macros be included in the macros v3 SIP proposal? In particular, please try to answer the following questions
Thanks a lot to Ólafur, Eugene and the Scala Center in general for setting up such a thorough and transparent process.
Here are my personal thoughts, as an extensive user of macros:
The first use-case for whitebox macros that comes to mind is of course quasiquotes, because we often want what is quoted to influence the typing of the resulting expression. This is invaluable when one wants to design type-safe quasiquote-based interfaces. For example, see the Contextual library. Haskell has similar capabilities thanks to Template Haskell.
This extends the point above, but it goes much further.
We have been working on Squid, an experimental type-safe metaprogramming framework that makes use of quasiquotes as its primary code manipulation tool. Squid quasiquotes are statically-typed and hygienic. For example { import Math.pow; code"pow(0.5,3)" } has type Code[Double] and is equivalent to code"_root_.Math.pow(0.5,3)".
(You can read more about Squid Code quasiquotes in our upcoming Scala Symposium paper: Type-Safe, Hygienic, and Reusable Quasiquotes.)
The main reasons for using whitebox quasiquote macros here are:
to enable pattern matching: we have an alternative code{pow(0.5,3)} syntax that could be a blackbox macro, but it doesn't work in patterns (while the quasiquoted form does); making patterns more flexible might be a way to address this particular point;
to enable type-parametric matching: one can write things like pgrm.rewrite{ case code"Some[$t]($x).get" => x }. This works thanks to some type trickery, namely it generates a local module t that has a type member t.Typ, and types the pattern code using that type, extracting an x variable of type Code[t.Typ]. This is somewhat similar to the type providers pattern. The rewrite call itself is also a macro that, among other things, makes sure that rewritings are type-preserving.
to enable extending Scala's type system: we have an alternative ir quotation mechanism that is contextual, in the sense that quoted term types have an additional context parameter. This (contravariant) type parameter expresses the term's context dependencies/requirements. The term val q = ir"(?x:Int).toDouble" introduces a free variable x and thus has type IR[Double,{val x:Int}], where the second type argument expresses the context requirement. (IR stands for Intermediate Representation.) The expression code"(x:Int) => $q + 1" has type IR[Int => Double,{}] because the free variable x in q is captured (this is determined statically). That term can then safely be run (using its .run method, which requires an implicit proof that the context is empty, C =:= {}). Thus we "piggyback" on Scala's type checker in a modular way to provide our own user-friendly safety checking that would be very hard to express using vanilla Scala.
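The context-tracking idea can be modelled in plain Scala without any macros. In this toy sketch (all names are illustrative stand-ins, and Any plays the role of the empty context {}), the contravariant context parameter makes the compiler reject running a term that still has free variables:

```scala
// Toy model of tracking free-variable requirements in the type.
// C is the required context, contravariant so that demanding less is a subtype.
class IR[+T, -C](val show: String)

// A term with a free variable `x: Int` demands a context providing it:
val q = new IR[Double, { val x: Int }]("x.toDouble")

// Capturing x discharges the requirement, leaving the empty context (Any):
val f = new IR[Int => Double, Any](s"(x: Int) => ${q.show} + 1")

// `run` is only available when no context is required:
def run[T](ir: IR[T, Any]): String = ir.show

run(f)    // OK
// run(q) // does not compile: q still requires { val x: Int }
```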
As you have guessed, this relies on invoking the compiler from within the quasiquote macro. I understand that this is technically tricky and makes type-checking "inseparably intertwined" with macro expansion, but on the other hand that's also an enormous advantage. If it's possible to sanitize the interface between macros and type-checkers, that would give Scala a unique capability that puts it in a league of its own in terms of expressivity: basically, the capability to have an extensible type system.
Could Squid’s quasiquotes be made a compiler plugin? Probably, though I’m not knowledgeable enough to answer with certainty, and I suspect it would be very hard to integrate these changes right into the different versions of Scala’s type checker.
As an aside, in Squid we also came up with the “object algebra interface” way to make language constructs expressed in the quasiquotes independent from the actual intermediate representation of code used. This seems similar to the way the new macros are intended to work –– the main difference being that we support only expressions (not class/method definitions).
The Dynamic trait
I think the usage of the Dynamic trait becomes extremely limited (from a type-safe programming point of view) if we don’t have a way to refine the types of the generated code based on the strings that are passed to its methods selectDynamic & co. (doing so is apparently even known as the “poor man’s type system”).
If that is possible to do in a sane way, I could not recommend going with that possibility enough!
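For reference, here is what plain (non-macro) Dynamic looks like; without a whitebox macro behind selectDynamic, every field selection must share a single static result type:

```scala
import scala.language.dynamics

// Every selection goes through selectDynamic and comes back as a String --
// a whitebox selectDynamic macro could instead refine each field's type.
class Row(fields: Map[String, String]) extends Dynamic {
  def selectDynamic(name: String): String =
    fields.getOrElse(name, sys.error(s"no such column: $name"))
}

val row = new Row(Map("name" -> "Jane", "age" -> "30"))
row.name // "Jane"
row.age  // "30" -- still a String, even though it holds a number
```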
Thank you for your detailed response @LPTK
In Squid, do you rely on fundep materialization? There may be a design space between blackbox and whitebox def macros that supports refined result types but not fundep materialization.
I suspect it would be very hard to integrate these changes right into the different versions of Scala’s type checker.
I suspect so too; we face the same challenges designing a macro system that works reliably across different compilers.
The Dynamic trait
That is a good observation. I am not sure how common this technique is. I have contacted the author of scalikejdbc to share how they use selectDynamic with whitebox def macros.
Also, not sure how may impact this.
the capability to have an extensible type system.
Note that this may not necessarily be a desirable capability. Some whitebox def macros are so powerful they can be used to turn Scala into another language!
I’d like a way for whitebox macro authors to be able (although not necessarily obliged) to separate the part of the macro that computes the return type from the part that computes the expanded term. Let’s call the first part “signature macros”.
For implicit macros, this would lend itself to more efficient typechecking. Even for non-implicit macros, an IDE could be more efficient if it could just run the “signature macro”.
I think that this separation also will help to shine a light on whether the full Scala language is the right language for signature macros, or if a more restrictive language could express a broad set of use cases of whitebox macros.
I suppose the contract would be that if the signature macro returned a type and no errors, the corresponding term expansion macro would be required to succeed and to conform to the computed return type.
Obviously a naive implementation of the signature macro is to just run the term macro and typecheck it, as per the status quo. I think we should aim higher than that, though!
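The intended contract can be modelled with ordinary functions standing in for the compiler hooks (a toy sketch; all names are illustrative): if the signature part succeeds, the term part must produce a result conforming to the computed type, and a tool could run just the cheap signature part.

```scala
// Stand-in for a compile-time type.
sealed trait Tpe
case class TupleTpe(arity: Int) extends Tpe

// "Signature macro": cheap; computes only the refined result type.
// An IDE could run just this part.
def toTupleSignature(fieldCount: Int): Tpe = TupleTpe(fieldCount)

// "Term macro": the full expansion; by contract, its type must agree
// with what the signature macro computed.
def toTupleExpansion(fields: List[String]): (Tpe, String) =
  (toTupleSignature(fields.length), fields.mkString("(", ", ", ")"))

val (tpe, tree) = toTupleExpansion(List("u.name", "u.age"))
// tpe == TupleTpe(2), tree == "(u.name, u.age)"
```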
Ryan Culpepper suggested essentially the same thing that you call “signature macros” just two weeks ago! Really glad to hear this suggestion; it means that at least a subset of us are thinking along the same lines.
cc/ @olafurpg
Not currently. We had a prototype system that perhaps did something like that (not sure): it was a system for statically generating evidence that structural types did not contain certain names or were disjoint in terms of field names. For example, you could write def foo[A,B](implicit dis: A <> B) meaning that A and B are structural types that share no field names. You could then call foo[{val x:Int},{val y:Double}] but not foo[{val x:Int},{val x:Double}]. When extending an abstract context C as in C{val x:Int}, the contextual quasiquote macro would look for evidence that C <> {def x} to ensure soundness in the face of name clashes.
However, instead of porting that old prototype to the current system, we’re probably going to move to a more modular solution, which shouldn’t need any implicit macros.
There is one particularly nasty thing that a Squid implicit macro currently does: it looks inside the current scope to see if it can find some type representation evidence. This allows us to use an extracted type t implicitly, as in case ir"Some[$t]($x)" => ... implicitly[t.Typ] ..., instead of having to write case ir"Some[$t]($x)" => implicit val t_ = t; ... implicitly[t.Typ] .... I understand this is probably asking macros for too much, and I think we could do without it (though it may degrade the user experience a little).
case ir"Some[$t]($x) => ... implicitly[t.Typ] ...
case ir"Some[$t]($x) => implicit val t_ = t; ... implicitly[t.Typ] ...
About Dynamic, one of the things I’ve used it for was to automatically redirect method calls to some wrapped object (cf. composition vs inheritance style).
Yeah, it’s a judgement call. IMHO Scala is already a language that lets you define a myriad different sub-languages thanks to its flexible syntax and expressive type system. I think that’s one thing many people like about the language (cf., for example, the vast ecosystem of SQL/data analytic libraries that define their own custom syntaxes and semantics).
Sounds like the most natural way to do it would be to just have type macros. Then whitebox macros are just blackbox macros with a return type that is a macro invocation.
def myWhitebox[A](a: A, str: String): MyReturn[A, str.type] = macro ...
type MyReturn[A, S <: String with Singleton] = macro ...
It’s a nice separation of concerns. But I’m afraid there are a lot of whitebox macros in the wild where both code generation and type refinement are very much intertwined, because they’re semantically inseparable. In the case of Squid, what I’d do is to parametrize the current macro to either just compute a type or do the full code generation; but that would mean a lot of computation would be duplicated (I would have to parse, transform, typecheck and analyse the quasiquote string in both type signature and code-gen macro invocations), and batch compile times would be strictly worse.
To add onto what @LPTK wrote, I’d speculate that there are very few whitebox macros for which the signature macro could be easily separated from the term macro without a lot of code duplication and/or redundant work. An alternative approach may be to conflate the signature macro and the term macro. The macro expansion could return a tuple of (List[c.Type], c.Tree) where the list of types must contain exactly as many types as there are method type arguments. For example, suppose that I want to implement the CaseClass.toTuple[T] method from above.
(List[c.Type], c.Tree)
CaseClass.toTuple[T]
object CaseClass {
def toTuple[C, T](cls: C): T = macro CaseClassMacros.impl[C, T]
}
class CaseClassMacros(val c: Context) {
import c.universe._
def impl[C: c.WeakTypeTag, T](cls: c.Expr[C]): (List[c.Type], c.Tree) = {
...
val tree = q"""..."""
val tType: c.Type = ???
val resultTypes = List(weakTypeOf[C], tType)
(resultTypes, tree)
}
}
The typechecking of the returned tree could be deferred until after the compiler has verified that result types are valid. There would be no need to re-expand the macro using the result types since the type T is a functional dependency of C.
While this is less conceptually elegant than having independent signature and term macros, I think that it would be more practical for macro authors.
When thinking about macros I have found it useful to consider two dimensions:
First dimension: What is the expressive power of the macro language?
Second dimension. When should this power be available?
Scala with whitebox macros is currently at the extreme point (3, 3) of the matrix. This is IMO is a very problematic point to be on. Having the full power of the underlying language at your disposal means your editor can (1) crash, (2) become unresponsive, or (3) pose a security risk, just because some part of your program is accessing a bad macro in a library. That’s not hypothetical. I still remember the very helpful(?) Play schema validation macro that caused all IDEs to freeze.
Scala with blackbox macros is at (3, 2). This is slightly better as only building but not editing is affected by bad macros and you can do a better job of isolating and diagnosing problems. But it still would make desirable tools such as a compile server highly problematic because of security concerns.
If we take other languages as comparisons they tend to be more conservative. Template Haskell lets you do lots of stuff, but it is its own language. I believe that was a smart decision of the Haskell designers. Meta OCaml is blackbox only and does not have any sort of inspection, so it’s essentially compile-time staging and nothing else.
So, if Scala continued to have whitebox macros it would indeed be far more powerful than any other language. Is that good or bad? Depends on where you come from and what you want to do, for sure. But I will be firmly in the “it would be very bad” camp. In the future, I want to concentrate on making Scala a better language, with better tooling, as opposed to a more powerful toolbox in which people can write their own language . There’s nothing wrong with toolboxes, but it’s not a primary goal of Scala as I see it.
Given this dilemma, maybe there’s no single solution that satisfies all concerns. That was the original motivation of the inline/meta proposal in SIP 29: Have only inlining available as a standard part of the language. Inlining does a core part of macro expansion (arguably, the hardest part to implement correctly). Then build on that using meta blocks that are enabled by a special compiler mode or a compiler plugin. If we have only blackbox macros the plugin can be a standard one which simply runs after typer. With whitebox macros the “plugin” would in fact have to replace the typer, which is much more problematic. I believe it would in effect mean we define a separate language, similar to Template Haskell. That’s possible, but I believe we need then to be upfront about this.
One thing to add to my previous comment: Some form of type macros (or, as @retronym calls them, signature macros) might be a good replacement for unfettered whitebox macros. Dotty’s inline essentially does two things:
inline
In the type language, we already have beta-reduction. If
type F[X] = G[X]
then F[String] is known to be the same as G[String]. If we add some form of condiional, we might already have enough to express what we want, and we would stay in the same envelope of expressive power.
F[String]
G[String]
To get into the same ballpark in terms of expressiveness, I think you’ll also need some form of recursion purely at the type level, which is not currently possible:
type Fix[A[_]] = A[Fix[A]]
illegal cyclic reference: alias [A <: [_$2] => Any] => A[Fix[A]] of type Fix refers back to the type itself
Wouldn’t supporting this potentially break the type system pretty badly?
A minor nitpick:
Actually, MetaOCaml is not related to macros. It’s essentially for generating and compiling code at runtime (traditional multi-stage programming) –– though it’s true that the approach was ported to compile-time with systems such as MacroML, or more recently modular macros.
@LPTK Yes, we’d have to add some form of recursion to type definitions, with the usual complications to ensure termination.
You are right about Meta OCaml. I meant OCaml Macros:
For implicit macros, this would lend itself to more efficient typechecking.
For implicit macros, this would lend itself to more efficient typechecking.
Indeed. But, furthermore we have by now decided that every implicit def needs to come with a declared return type. This restriction is necessary to avoid puzzling implicit failures due to cyclic references. So, it seems whatever is decided for whitebox macros, implicit definitions in the future cannot be whitebox macros.
We use whitebox macros to compile db queries and return query result as typed rows, i.e. db query string also serves as a class definition.
For example:
scala> tresql"emp[ename = ‘CLARK’] {ename, hiredate}".map(row => row.ename + " hired " + row.hiredate) foreach println
select ename, hiredate from emp where ename = 'CLARK’
CLARK hired 1981-06-09
Is there a way to achieve this without whitebox macros?
We use whitebox macros to do symbolic computation (using a Java library called Symja) at compile-time.
As we have no idea what the final function/formula is going to look like, we cannot define a fixed return type.
I’d also be interested if there’s a way to do this without whitebox macros. | https://contributors.scala-lang.org/t/whitebox-def-macros/1210 | CC-MAIN-2017-43 | refinedweb | 3,255 | 62.17 |
JSON++
JSON++ is a self contained Flex/Bison JSON parser for C++11. It parses strings and files in JSON format, and builds an in-memory tree representing the JSON structure. JSON objects are mapped to
std::maps, arrays to
std::vectors, JSON native types are mapped onto C++ native types. The library also includes printing on streams. Classes exploit move semantics to avoid copying parsed structures around. It doesn't require any additional library (not even
libfl).
Git repository
A version of this repository (regularly mirrored) is available on GitHub.
Contributors
JSON++ is not a personal project anymore, people is constantly writing to me, and sending pull requests to improve it and make it better. I'd like to thank these people by adding them to this Contributors section (in order of contribution).
Thanks for your effort fellas.
Usage
#include <iostream> #include "json.hh" using namespace std; using namespace JSON; int main(int argc, char** argv) { // Read JSON from a string Value v = parse_string(<your_json_string>); cout << v << endl; // Read JSON from a file v = parse_file("<your_json_file>.json"); cout << v << endl; // Or build the object manually Object obj; obj["foo"] = true; obj["bar"] = 3; Object o; o["given_name"] = "John"; o["family_name"] = "Boags"; obj["baz"] = o; Array a; a.push_back(true); a.push_back("asia"); a.push_back("europe"); a.push_back(55); obj["test"] = a; cout << o << endl; return 0; }
How to build JSON++
The project includes a
CMakeLists.txt files which allows you to generate build files for most build systems. Just run
cmake .
and then
make
The project generates
- json.tab.hh,
- json.tab.cc, and
- lex.yy.cc
files from
json.l and
json.y, then compiles them (and a few other files) into a
libjson library, which is finally used to link the
test executable. You can use the library in your projects, or use the Flex/Bison files straight away.
How to build with unit tests
If you have the cppunit framework () installed on your system, you can make a build with unit tests as follows:
mkdir build cd build cmake .. -DWITH_UNIT_TESTS=ON make ctest -V
The usage of an out of source build is strongly advised, since even more files are generated by the CTest testing tool.
How to build for measuring code coverage
Specify the
Coverage build type as follows:
cd build cmake .. -DCMAKE_BUILD_TYPE=Coverage
You can get a code coverage report with
gcovr ():
cd build gcovr --xml --root .. --exclude "ut/.*" --exclude "test.cc" > coverage.xml
This produces a report in the XML file format, which can be visualized with tools such as the Cobertura plugin for the jenkins continuous integration server.
How to generate API documentation
If you have
doxygen () installed on your system, an API documentation will be generated automatically as part of
make. You can also request its generation explicitly:
make doc
You will find the documentation in your build directory at
./html/index.html.
Flex/Bison quirks when using C++ classes
This section is for the ones who got here because they're trying to build stuff with Flex/Bison and C++. This was my first Flex/Bison parser (the main motivation behind its development being that I didn't find a parser for JSON in C++ which didn't require a number of extra libraries, plus I wanted to learn Flex/Bison).
So, for the ones venturing in this world, here's a few things I wish I knew when I set off to write the parser.
- Every rule of the Bison grammar has a left-hand side, to which the parsed objects (no matter their type), must be assigned. To do this, a
unionis used. Bison uses the
%union { ... }rule to declare the types inside the union, which must only contain native C types or pointers to C++ classes,
- in case pointers to C++ classes are used in
%union, classes extending
stdcontainers won't work, so you'll need to wrap
stdstuff in your own classes,
- always put a starting rule in the grammar to assign the result of the overall parse to a variable, e.g.,
json: value { $$ = $1; },
- as a general rule, functions requiring Flex functions, e.g.,
yy_scan_string, etc., should be defined in the
.lfile, and their prototypes put in the
.yfile as well, so that they can be called from the parser's functions,
- ... (to be continued as I find out more).
Licensing
This code is distributed under the very permissive MIT License but, if you use it, you might consider referring to the repository. | https://bitbucket.org/tunnuz/json/src | CC-MAIN-2016-30 | refinedweb | 749 | 63.19 |
8 Queens Problem using Backtracking
Reading time: 30 minutes | Coding time: 10 minutes
You are given an 8x8 chessboard, find a way to place 8 queens such that no queen can attack any other queen on the chessboard. A queen can only be attacked if it lies on the same row, or same column, or the same diagonal of any other queen. Print all the possible configurations.
To solve this problem, we will make use of the Backtracking algorithm. The backtracking algorithm, in general checks all possible configurations and test whether the required result is obtained or not. For thr given problem, we will explore all possible positions the queens can be relatively placed at. The solution will be correct when the number of placed queens = 8.
The time complexity of this approach is O(N!).
Input Format - the number 8, which does not need to be read, but we will take an input number for the sake of generalization of the algorithm to an NxN chessboard.
Output Format - all matrices that constitute the possible solutions will contain the numbers 0(for empty cell) and 1(for a cell where queen is placed). Hence, the output is a set of binary matrices.
Visualisation from a 4x4 chessboard solution :
In this configuration, we place 2 queens in the first iteration and see that checking by placing further queens is not required as we will not get a solution in this path. Note that in this configuration, all places in the third rows can be attacked.
As the above combination was not possible, we will go back and go for the next iteration. This means we will change the position of the second queen.
In this, we found a solution.
Now let's take a look at the backtracking algorithm and see how it works:
The idea is to place the queens one after the other in columns, and check if previously placed queens cannot attack the current queen we're about to place.
If we find such a row, we return true and put the row and column as part of the solution matrix. If such a column does not exist, we return false and backtrack*
Pseudocode
START 1. begin from the leftmost column 2. if all the queens are placed, return true/ print configuration 3. check for all rows in the current column a) if queen placed safely, mark row and column; and recursively check if we approach in the current configuration, do we obtain a solution or not b) if placing yields a solution, return true c) if placing does not yield a solution, unmark and try other rows 4. if all rows tried and solution not obtained, return false and backtrack END
Implementation
Implementaion of the above backtracking algorithm :
#include <bits/stdc++.h> using namespace std; int board[8][8]; // you can pick any matrix size you want bool isPossible(int n,int row,int col){ // check whether // placing queen possible or not // Same Column for(int i=row-1;i>=0;i--){ if(board[i][col] == 1){ return false; } } //Upper Left Diagonal for(int i=row-1,j=col-1;i>=0 && j>=0 ; i--,j--){ if(board[i][j] ==1){ return false; } } // Upper Right Diagonal for(int i=row-1,j=col+1;i>=0 && j<n ; i--,j++){ if(board[i][j] == 1){ return false; } } return true; } void nQueenHelper(int n,int row){ if(row==n){ // We have reached some solution. // Print the board matrix // return for(int i=0;i<n;i++){ for(int j=0;j<n;j++){ cout << board[i][j] << " "; } } cout<<endl; return; } // Place at all possible positions and move to smaller problem for(int j=0;j<n;j++){ if(isPossible(n,row,j)){ // if no attack, proceed board[row][j] = 1; // mark row, column with 1 nQueenHelper(n,row+1); // call function to continue // further } board[row][j] = 0; // unmark to backtrack } return; } void placeNQueens(int n){ memset(board,0,8*8*sizeof(int)); // allocate 8*8 memory // and initialize all // cells with zeroes nQueenHelper(n,0); // call the backtracking function // and print solutions } int main(){ int n; cin>>n; // could use a default 8 as well placeNQueens(n); return 0; }
Output ( for n = 4): 1 indicates placement of queens
0 0 1 0 1 0 0 0 0 0 0 1 0 1 0 0
Explanation of the above code solution:
These are two possible solutions from the entire solution set for the 8 queen problem.
main() { call placeNQueens(8), placeNQueens(){ call nQueenHelper(8,0){ row = 0 if(row==n) // won't execute as 0 != 8 for(int j=0; j<8; j++){ { if(isPossible==true) { board[0][0] = 1 // board[row][0] = 1 call nQueenHelper(8,row+1) // recur for all rows further print matrix when row = 8 if solution obtained and (row==n) condition is met } board[0][0] = 0 // backtrack and try for // different configurations } } } }
for example, the following configuration won't be displayed
Time Complexity Analysis
- the isPossible method takes O(n) time
- for each invocation of loop in nQueenHelper, it runs for O(n) time
- the isPossible condition is present in the loop and also calls nQueenHelper which is recursive
adding this up, the recurrence relation is:
T(n) = O(n^2) + n * T(n-1)
solving the above recurrence by iteration or recursion tree,
the time complexity of the nQueen problem is = O(N!)
Question :-
You are given an NxN maze with a rat placed at (0,0). Find and print all the paths that the rat can follow to reach its destination i.e (N-1,N-1). The rat can move in all four directions (left,right,up,down).
Value of every cell will be either 0 or 1. 0 represents a blocked cell, the rat cannot move through it, though 1 is an unblocked cell.
Solve the problem using backtracking algorithm,
Input - an integer N and a binary maze of size NxN, showing blocked and
unblocked cells.
Output - all possible path matrices the rat can travel from (0,0) to (N-1,N-1). | https://iq.opengenus.org/8-queens-problem-backtracking/ | CC-MAIN-2020-29 | refinedweb | 1,015 | 53.65 |
React Practice Components
This assumes you’ve gone through my Test Drive React (Hello World) tutorial. If you haven’t yet, head over there (it only takes about 3 minutes) and come back once you have a running app.
Ideas for Practice
You’ll get better at React like you get better at anything – with practice. Except it can be a little daunting to figure out what to write. Once you’ve got that sweet Hello World under your belt, should you move on to learning Redux? Setup a build with Webpack? Write some tests?
Nope.
Before you tackle the rest of the React ecosystem (and you’ll get there soon enough), you want to have a really solid understanding of core React concepts and APIs. There aren’t many of them. But it’ll save you a ton of time later on if you know how to “think in components.”
So let’s write some components!
Is this a walkthrough?
I want you to learn this stuff, so I’m not going to provide a ton of guidance here. But I will try to provide these practice suggestions in a way that builds in complexity, so you’re not immediately over your head. If the first few seem too easy, skip on down the list a bit.
Where do I write the code?
Use the Hello World tutorial as as starting point. Put everything in one file for now. The bottom (
export default ...) component is the “root” one. Ideally its
render method will include only a single component.
0. Hello World
Hopefully you already did this one
1. Nested components: Simple Card
This is a common pattern seen around the web. Facebook messages, Tweets, and so on:
Start off with a simpler version:
Create a component for every rectangle, and figure out what props each one needs. It’s often easiest to start at the “leaf” nodes and work your way up – build the simplest, innermost elements first.
Title- This is a simple text display. Takes as a prop the
textto display.
Description- Another simple text display. Takes as a prop the
textto display.
Image- A single image. Takes as a prop the
urlof the image.
SimpleCard- the wrapper component. Takes
itemas a prop.
itemhas a string
title, a string
descriptionand a string
imageUrl.
Once you’re done, the root component’s render function should return a
<SimpleCard item={item}/>.
SimpleCard’s render function should return something that looks like this:
<div className='simple-card'> <Image url={this.props.item.imageUrl}/> <Title text={this.props.item.title}/> <Description text={this.props.item.description}/> </div>
Maybe you want to wrap
Title and
Description in a div so that you can style it appropriately. It’s up to you.
2. Nested components: Facebook Comment Card
Follow the same pattern, and build the slightly more complicated Facebook comment card:
Draw rectangles around every element before you start building, either in your head, or print it out if you have to.
I’d create a component for the “2 seconds ago” item because I could see that being used all over the place. “Name . Location” could be a single component, or you could nest it further.
If you’re unsure, err on the side of “stupidly small components” rather than big, complicated render methods. This is what makes your code easier to reason about, and if you’re still on the fence about mixing templates and view logic, it definitely helps to keep the “template” part as small as you can.
3. Lists / Arrays
If you come from Angular, you’re used to
ng-repeat to create lists. React doesn’t have custom syntax for this, it’s just regular Javascript.
Inside the
render method you can use
map on your array to turn it into an array of components (assuming this component was passed a prop
items={an array}:
function render() { return ( <ul> {this.props.items.map(function(item) { return <li key={item.id}>{item.name}</li> })} </ul> ); }
keyprop passed to the
li? Any time you create an array of child components, React needs each one to have a unique key so that it can make its DOM diffing algorithm work. You can read more here. React will warn you (in the console) if you forget this.
Try creating a list of Simple Cards, or Facebook Comment cards, using the components you created above.
4. Composing children: Table component
Angular has this concept of “transclusion” that lets you pass content into a directive. It comes with a lot of gotchas, and it’s pretty limited – you can’t easily split up the children, find specific children, or anything like that without writing some complex hard-to-debug code.
React has a much nicer solution. Any components nested inside another are passed via the
children prop, and the parent component can decide what to do with the children.
var Container = React.createClass({ render: function() { return ( <div className='container'> <div className='row'> Here are the children: {this.props.children} </div> </div> ); } }); var Demo = React.createClass({ render: function() { return ( <Container> <ul> <li>Joe</li> <li>Mary</li> <li>Jane</li> </ul> </Container> ); } });
Try using the
children prop to create a Table component that accepts a series of Row components as children, and renders them inside a table structure:
<table> <tbody> <!-- children go here --> </tbody> </table>
Make sure your Row components return valid HTML elements that can nest inside a
tbody (e.g. Row should return a
tr). Open up the Elements tab in the Chrome DevTools (or equivalent) and look at the DOM structure that’s generated by your Table component.
Enough for now?
Hopefully these exercises are helpful. You can take them as far as you’d like – once you’ve got the basics down, try recreating the structure of Slack or Twitter or some other large site with a lot of nested components.
Don’t worry about routing, interactions, and all that at first – just take some static data and render it. This is a great way to learn core React without getting bogged down in all the details. Have fun! | https://daveceddia.com/react-practice-components/ | CC-MAIN-2019-35 | refinedweb | 1,018 | 66.13 |
First time here? Check out the FAQ!
This was a problem with having using namespace cv in a a header, and then including windows headers. This pulled cv::ACCESS_MASK into global scope where it clashed with the already global Windows-defined ACCESS_MASK.
using namespace cv
cv::ACCESS_MASK
ACCESS_MASK
I've fixed this and submitted a PR:
Something like Boost::Python may be a good place to start, if you're interested.
"If most of your detected edge pixels are on the line you seek, you could do a cv::lineFit"
I really wouldn't advise this in this case, it's very non-robust to outliers (this robustness is the whole point of using a voting scheme like Hough lines)
How can I write my own python wrapper function for a C++ function, which does the automatic numpy->cv::Mat conversion for its arguments in the same way as the OpenCV python wrappers do?
As near as I can tell, the (generated) OpenCV python wrappers use pyopencv_to and pyopencv_from from modules/python/cv2.cpp. Should I just copy these (with appropriate licensing)? Or is there a library I can link against which provides such helper functions?
pyopencv_to
pyopencv_from
modules/python/cv2.cpp | https://answers.opencv.org/users/860/leszek/?sort=recent | CC-MAIN-2019-51 | refinedweb | 202 | 61.26 |
FCLOSE(3) Library Routines FCLOSE(3)
fclose - close a stream
#include <stdio.h> int fclose (FILE *stream);
The fclose function dissociates the named stream from its underlying file or set of functions. If the stream was being used for output, any buffered data is written first, using fflush(3).
Upon successful completion 0 is returned. Otherwise, EOF is returned and the global variable errno is set to indicate the error. In either case no further access to the stream is possible.
EBADF The argument stream is not an open stream. The fclose function may also fail and set errno for any of the errors specified for the routines close(2) or fflush(3).
close(2), fflush(3), fopen(3), setbuf(3)
The fclose function conforms to ANSI/C. GNO 15 September 1997 FCLOSE(3) | http://www.gno.org/gno/man/man3/fclose.3.html | CC-MAIN-2017-43 | refinedweb | 134 | 76.01 |
ispunct() Prototype
int ispunct(int ch);
The
ispunct() function checks if ch is a punctuation character as classified by the current C locale. By default, the punctuation characters are !"#$%&'()*+,-./:;<=>?@[\]^_`{|}~.
The behaviour of
ispunct() is undefined if the value of ch is not representable as unsigned char or is not equal to EOF.
It is defined in <cctype> header file.
ispunct() Parameters
ch: The character to check.
ispunct() Return value
The
ispunct() function returns non zero value if ch is a punctuation character, otherwise returns zero.
Example: How ispunct() function works
#include <cctype> #include <iostream> using namespace std; int main() { char ch1 = '+'; char ch2 = 'r'; ispunct(ch1) ? cout << ch1 << " is a punctuation character" : cout << ch1 << " is not a punctuation character"; cout << endl; ispunct(ch2) ? cout << ch2 << " is a punctuation character" : cout << ch2 << " is not a punctuation character"; return 0; }
When you run the program, the output will be:
+ is a punctuation character r is not a punctuation character | https://cdn.programiz.com/cpp-programming/library-function/cctype/ispunct | CC-MAIN-2021-04 | refinedweb | 158 | 54.02 |
Hi.
I am currently learning c++, by reading and doing exersises if find online.
The exersis askes me to create a program that find the mean of 1,2,3 or 4 numbers.
I know this can be done easily by writing one "small program" for each of the situations, but I want to try to make it a bit more advance.
I want to use a char array where the numbers are seperated like this 1,2,3,7 and so one.
But I have a few questions, how can I make sure the input given is a number? I was thinking of converting it to ancii, but it has to be a better way?
And if I use a loop to assign the value from the char array to an int array, how can I do so that every whole number is placed at the same space? So that no[1] = 12, insted of no[1] = 1 and no[2] = 2
here is my code so far:
EDIT:EDIT:Code:
#include <cstdlib>
#include <iostream>
using namespace std;
int main()
{
char numbers[100];
int no[100];
//This will give the info about the program and how to use it
cout <<"This program will to two things:\n";
cout <<" - Find the average\n";
cout <<" - Find the standar diviation\n\n";
cout <<"How to use:\n";
cout <<"Enter the values you would like to use for this exsample,\n";
cout <<"please seperate them with a blank space " ":\n";
//store the info in a variable for later use
cin.getline ( numbers, 100, '\n' );
//loop through the string and exstract the number
//store it in a new variable
for (int i = 0; i < 100; i++)
{
if (numbers[i] == ",")
{
//It is a seperator, nothing happesns
//The loop just continius
}
else
{
no[i] = numbers[i];
}
}
cout <<"\n\n\n";
system("PAUSE");
return 0;
}
Why is the line "if (numbers[i] == ",")" not valid?
thanks in advance | http://cboard.cprogramming.com/cplusplus-programming/84335-loop-char-problem-printable-thread.html | CC-MAIN-2015-40 | refinedweb | 320 | 67.01 |
On Wed, 2007-02-07 at 14:12 -0600, Jason L Tibbitts III wrote: > >>>>> "JO" == Joe Orton <jorton redhat com> writes: > > JO> Seriously guys, I've said it two times already: if you want me to > JO> repaint my bikeshed, first convince the central packaging > JO> committee on high that your particular choice of bikeshed colour > JO> not mine must be mandated across the distro. > > Frankly we expected that many Rad Hat folks would simply not want to > change things, and I suppose my personal hope of not having to > involve the packaging committee in making tons of trivial rules in > order to force people to change things is dashed by this very thread. > But let's examine this specific situation in detail: > > The main restriction against using $RPM_SOURCE_DIR is that you can in > no way ever write anything to it. That is the primary issue, and is > implicitly given in other guidelines. The package in question does > not write to $RPM_SOURCE_DIR if I understand correctly. > > For the package in question, $RPM_SOURCE_DIR can rather trivially be > replaced by SourceN: and %{SOURCEN} tags. Is that correct? > > Are there situations where $RPM_SOURCE_DIR cannot be easily replaced > by the use of SourceN: and %{SOURCEN} tags? I can think of situations > like looping over source files (which frankly I've wished I could do > in the past) which are at best a good bit more difficult, but perhaps > someone has the necessary wizardly knowledge. I'm talking about doing > things like copying ten source files into the buildroot without > actually listing out ten %{SOURCEN} bits. > > The consistency argument for using SourceN: and %{SOURCEN} tags > instead of $RPM_SOURCE_DIR is obvious. > > Is there a technical argument for using SourceN: and %{SOURCEN} tags > instead of $RPM_SOURCE_DIR? I'm afraid I don't know it. 
Here are reasons for using RPM_SOURCE_DIR over SOURCEn 1) install "${RPM_SOURCE_DIR}/foobar dst" is *much* more readable than "install ${SOURCEN} dst", it means as a mere human being I do not have to act as a macro preprocessor in order to read the spec file and know what is being installed where. Readability is a huge win! 2) you can loop, I do it all the time for f in file1 file2; do install ${RPM_SOURCE_DIR}/$f dst done 3) SOURCEn is a mechanism for source RPM manifests, if you want to use it elsewhere then by all means go for it, but don't lose sight of the fact it's a manifest directive with the side effect of introducing a symbol into the rpm namespace, use of that symbol is purely optional. 4) It's none of your business how I implement something as long as its not broken. Forcing every spec file to replace $RPM_SOURCE_DIR with $SOURCEn is consistency without merit. Ralph Waldo Emerson wrote "foolish consistency is the hobgoblin of little minds" It's my choice as to what constitutes a maintainable spec file based on value judgments and experience. +1 for Joe Orton -1 for the Bureaucrats in People's Central Packaging Committee and their police who seek to banish me to the gulag for crimes against spec files by virtue of improper thinking ;-) -- John Dennis <jdennis redhat com> Learn. Network. Experience open source. Red Hat Summit San Diego | May 9-11, 2007 Learn more: | https://www.redhat.com/archives/fedora-maintainers/2007-February/msg00238.html | CC-MAIN-2015-18 | refinedweb | 546 | 64.04 |
PyMC3 and Theano
What is Theano
Theano is a package that allows us to define functions involving array operations and linear algebra. When we define a PyMC3 model, we implicitly build up a Theano function from the space of our parameters to their posterior probability density up to a constant factor. We then use symbolic manipulations of this function to also get access to its gradient.
Note that the original developers have stopped maintaining Theano, so PyMC3 uses Theano-PyMC, a fork of Theano maintained by the PyMC3 developers.
For a thorough introduction to Theano see the theano docs, but for the most part you don't need detailed knowledge about it as long as you are not trying to define new distributions or other extensions of PyMC3. But let's look at a simple example to get a rough idea about how it works. Say, we'd like to define the (completely arbitrarily chosen) function

f(a, x, y) = Σᵢ exp(a·xᵢ³ + yᵢ²)
First, we need to define symbolic variables for our inputs (this is similar to eg SymPy’s Symbol):
import pymc3 as pm
import numpy as np
import theano
import theano.tensor as tt

# We don't specify the dtype of our input variables, so it
# defaults to using float64 without any special config.
a = tt.scalar('a')
x = tt.vector('x')
# `tt.ivector` creates a symbolic vector of integers.
y = tt.ivector('y')
Next, we use those variables to build up a symbolic representation of the output of our function. Note that no computation is actually being done at this point. We only record what operations we need to do to compute the output:
inner = a * x**3 + y**2
out = tt.exp(inner).sum()
Note
In this example we use tt.exp to create a symbolic representation of the exponential of inner. Somewhat surprisingly, it would also have worked if we used np.exp. This is because numpy gives objects it operates on a chance to define the results of operations themselves. Theano variables do this for a large number of operations. We usually still prefer the theano functions instead of the numpy versions, as that makes it clear that we are working with symbolic input instead of plain arrays.
Now we can tell Theano to build a function that does this computation. With a typical configuration, Theano generates C code, compiles it, and creates a python function which wraps the C function:
func = theano.function([a, x, y], [out])
We can call this function with actual arrays as many times as we want:
a_val = 1.2
x_vals = np.random.randn(10)
y_vals = np.int32(np.random.randn(10))

out = func(a_val, x_vals, y_vals)
For the most part the symbolic Theano variables can be operated on like NumPy arrays. Most NumPy functions are available in theano.tensor (which is typically imported as tt). A lot of linear algebra operations can be found in tt.nlinalg and tt.slinalg (the NumPy and SciPy operations respectively). Some support for sparse matrices is available in theano.sparse. For a detailed overview of available operations, see the theano api docs.
A notable exception where theano variables do not behave like NumPy arrays are operations involving conditional execution.
Code like this won’t work as expected:
a = tt.vector('a')
if (a > 0).all():
    b = tt.sqrt(a)
else:
    b = -a
(a > 0).all() isn’t actually a boolean as it would be in NumPy, but still a symbolic variable. Python will convert this object to a boolean and according to the rules for this conversion, things that aren’t empty containers or zero are converted to True. So the code is equivalent to this:
a = tt.vector('a')
b = tt.sqrt(a)
To get the desired behaviour, we can use tt.switch:
a = tt.vector('a')
b = tt.switch((a > 0).all(), tt.sqrt(a), -a)
Indexing also works similarly to NumPy:
a = tt.vector('a')
# Access the 10th element. This will fail when a function built
# from this expression is executed with an array that is too short.
b = a[10]

# Extract a subvector
b = a[[1, 2, 10]]
Changing elements of an array is possible using tt.set_subtensor:
a = tt.vector('a')
b = tt.set_subtensor(a[:10], 1)

# is roughly equivalent to this (although theano avoids
# the copy if `a` isn't used anymore)
a = np.random.randn(10)
b = a.copy()
b[:10] = 1
How PyMC3 uses Theano
Now that we have a basic understanding of Theano we can look at what happens if we define a PyMC3 model. Let’s look at a simple example:
true_mu = 0.1
data = true_mu + np.random.randn(50)

with pm.Model() as model:
    mu = pm.Normal('mu', mu=0, sigma=1)
    y = pm.Normal('y', mu=mu, sigma=1, observed=data)
In this model we define two variables: mu and y. The first is a free variable that we want to infer, the second is an observed variable. To sample from the posterior we need to build the function

log P(μ|y) + C = log N(μ|0, 1) + Σᵢ log N(yᵢ|μ, 1),

where N(x|μ, σ²) is the normal probability density with mean μ and variance σ².
To build that function we need to keep track of two things: The parameter space (the free variables) and the logp function. For each free variable we generate a Theano variable. And for each variable (observed or otherwise) we add a term to the global logp. In the background something similar to this is happening:
# For illustration only, these functions don't exactly
# work this way!
model = pm.Model()

mu = tt.scalar('mu')
model.add_free_variable(mu)
model.add_logp_term(pm.Normal.dist(0, 1).logp(mu))

model.add_logp_term(pm.Normal.dist(mu, 1).logp(data))
So calling pm.Normal() modifies the model: it changes the logp function of the model. If the observed keyword isn't set it also creates a new free variable. In contrast, pm.Normal.dist() doesn't care about the model, it just creates an object that represents the normal distribution. Calling logp on this object creates a theano variable for the log probability or log probability density of the distribution, but again without changing the model in any way.
Continuous variables with support only on a subset of the real numbers are treated a bit differently. We create a transformed variable that has support on the reals and then modify this variable. For example:
with pm.Model() as model:
    mu = pm.Normal('mu', 0, 1)
    sd = pm.HalfNormal('sd', 1)
    y = pm.Normal('y', mu=mu, sigma=sd, observed=data)
is roughly equivalent to this:
# For illustration only, not real code!
model = pm.Model()

mu = tt.scalar('mu')
model.add_free_variable(mu)
model.add_logp_term(pm.Normal.dist(0, 1).logp(mu))

sd_log__ = tt.scalar('sd_log__')
model.add_free_variable(sd_log__)
model.add_logp_term(corrected_logp_half_normal(sd_log__))

sd = tt.exp(sd_log__)
model.add_deterministic_variable(sd)

model.add_logp_term(pm.Normal.dist(mu, sd).logp(data))
The return values of the variable constructors are subclasses of theano variables, so when we define a variable we can use any theano operation on them:
design_matrix = np.array([[...]])

with pm.Model() as model:
    # beta is a tt.dvector
    beta = pm.Normal('beta', 0, 1, shape=len(design_matrix))
    predict = tt.dot(design_matrix, beta)
    sd = pm.HalfCauchy('sd', beta=2.5)
    pm.Normal('y', mu=predict, sigma=sd, observed=data)
"Prototyping is the conversation you have with your ideas" - Tom Wujec
This is the fifth part of the series where we see our theoretical foundation on machine translation come to fruition.

- Build the prototype of the machine translation application (This post)
- Build the production grade code for the training module using Python scripts.
- Building the Machine Translation application -From Prototype to Production : Inference process
- Build the machine translation application using Flask and understand the process to deploy the application on Heroku
In the previous 4 posts we understood the solution landscape for machine translation, explored different architecture choices for sequence to sequence models and did a deep dive into the forward pass and back propagation algorithm for LSTMs. Having set the theoretical foundation for the application, it is time to build a prototype of the machine translation application. We will be building the prototype using a Google Colab / Jupyter notebook.
Building the prototype
The prototype building phase will consist of the following steps.
- Preprocessing the raw data for machine translation
- Preparing the train and test sets
- Building the encoder – decoder architecture
- Training the model
- Getting the predictions
Let us get started in building the prototype of the application on a notebook
Downloading the raw text
Let us first grab the raw data for this application. The data can be downloaded from the link below.
This is also available in the github repository. The raw text consists of English sentences paired with the corresponding German sentence. Once the data text file is downloaded let us upload the data in our Google drive. If you do not want to do the prototype in Colab, you can download it in your local drive and then use a Jupyter notebook also for the purpose.
Preprocessing the text
Before starting the processes, let us import all the packages we will be using for the process
import string
import re
from numpy import array, argmax, random, take
import numpy as np
from numpy.random import shuffle
import pandas as pd
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Dense, LSTM, Embedding, RepeatVector, TimeDistributed
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras import optimizers
import matplotlib.pyplot as plt
%matplotlib inline
pd.set_option('display.max_colwidth', 200)
from pickle import dump
from unicodedata import normalize
The raw text which we have downloaded needs to be opened and progressively preprocessed through series of processing steps to ultimately get the train and test set which we require for building our models. Let us first define the path for the text, so as to take it from the google drive. This path has to be changed by you based on the path in which you load the data
# Define the path to the raw data set
fileurl = '/content/drive/My Drive/Bayesian Quest/deu.txt'
Once the path is defined, let us read the text data.
# open the file
file = open(fileurl, mode='rt', encoding='utf-8')
# read all text
text = file.read()
The text which is read from the text file would be in the format shown below
text[0:200]
From the output we can see that each record is separated by a new line (\n) and within each record the data we want is separated by tabs (\t). So we can first split each record on new lines (\n) and after that split each line on the tabs (\t) to get the data in the format we want.
# Split the text into individual lines
lines = text.strip().split('\n')
# Splitting each line based on tab spaces and creating a list
lines = [line.split('\t') for line in lines]
# Visualizing first 5 lines
lines[0:5]
We can see that the processed records are stored as lists, with each list containing an English word, its German translation and some metadata about the data. Let us store these lists as an array for convenience and then display the shape of the array.
# Storing the lines into an array
mtData = array(lines)
# Displaying the shape of the array
print(mtData.shape)
All the above steps we can represent as a function. Let us construct the function which will be used to load the data and do basic preprocessing of the data.
# function to read raw text file
def read_text(filename):
    # open the file
    file = open(filename, mode='rt', encoding='utf-8')
    # read all text
    text = file.read()
    # Split the text into individual lines
    lines = text.strip().split('\n')
    # Splitting each line based on tab spaces and creating a list
    lines = [line.split('\t') for line in lines]
    file.close()
    return array(lines)
We can call the function to load the data and convert it into an array of English and German sentences. We can also see that the raw data has more than 200,000 rows and three columns. We don't require the third column and therefore we can eliminate it. In addition, processing all rows would be computationally expensive. Let us take the first 50,000 rows. However this decision is left to you; choose how many rows you want based on the capacity of your machine.
# Reading the data using the function
mtData = read_text(fileurl)
# Taking only 50000 rows of data
mtData = mtData[:50000,:2]
print(mtData.shape)
mtData[0:10]
With the array format, the data is in a neat shape with the first column being English and the second one the corresponding German sentence. However if you notice the text, there are a lot of punctuations and other characters which are unwanted. We also need to standardize the text to lower case. Let us now crank up our cleaning process. The following are the steps which we will follow:
Normalize all unicode characters, which are special characters found in a language, to their corresponding ASCII format. We will be using a library called 'unicodedata' for this normalization.
- Tokenize the string to individual words
- Convert all the characters to lower case
- Remove all punctuations from the text
- Remove all non alphabets from text
Since there are multiple processes involved we will be wrapping all these processes in a function. Let us look at the code which implements this.
# Cleaning the document for all unwanted characters
def cleanDocs(docs):
  cleanArray = list()
  # Iterating through each row of the raw data
  for docStr in docs:
    cleanDoc = list()
    for line in docStr:
      # Normalising unicode characters
      line = normalize('NFD', line).encode('ascii', 'ignore')
      line = line.decode('UTF-8')
      # Tokenize on white space
      line = line.split()
      # Removing punctuations from each token
      line = [word.translate(str.maketrans('', '', string.punctuation)) for word in line]
      # Convert to lower case
      line = [word.lower() for word in line]
      # Remove tokens that are not alphabetic
      line = [word for word in line if word.isalpha()]
      # Join the tokens back into a string and store it
      cleanDoc.append(' '.join(line))
    cleanArray.append(cleanDoc)
  return array(cleanArray)
The input to the function is the array which we created in the earlier step. We first initialize some empty lists to store the processed text in line 3. In lines 5-7 we loop through each row (docs) and then through each column (line) of the row. The first process is to normalize the special characters. This is done through the normalize function available in the 'unicodedata' package. We use a normalization method called 'NFD' which maintains the same form of the characters, in lines 9-10. The next process is to tokenize the string into individual words by applying the split() function in line 12. We then proceed to remove all unwanted punctuations using the translate() function in line 14. After this process we convert the text to lower case and then retain only the characters which are alphabets using the isalpha() function in lines 16-18. We join the individual columns within a row using the join() function and then store the processed row in the 'cleanArray' list in lines 20-21. The final output after the whole process looks quite clean and is ready for further processing.
# Cleaning the sentences
cleanMtDocs = cleanDocs(mtData)
cleanMtDocs[0:10]
Neural Translation Data Set Preparation
Now that we have completed the initial preprocessing, it's now time to get closer to the core process. Let us first prepare the data sets in the format required for modelling. The various steps which we will follow for preparation of the data set are:
- Tokenizing the text and creating vocabulary dictionaries for English and German sentences
- Define the sequence length for both English and German text
- Encode the text sequences as integer sequences
- Split the data set into train and test sets
Let us see each of these processes
Tokenization and vocabulary creation
Tokenization is the process of splitting the string to individual unique words or tokens. So if the string is
"Hi I am enjoying this learning and I look forward for more"
The unique tokens vocabulary would look like the following
{'i': 1, 'hi': 2, 'am': 3, 'enjoying': 4, 'this': 5, 'learning': 6, 'and': 7, 'look': 8, 'forward': 9, 'for': 10, 'more': 11}
Note that only unique words are taken and each token is given an index which will come in handy when we encode the tokens in later steps. So let us go ahead and prepare the tokens. Please note that we will be creating separate vocabularies for English words and German words.
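Before turning to Keras, here is a minimal pure-Python sketch of the vocabulary construction described above. This is not the actual Keras implementation; it only illustrates the behaviour we rely on, namely that more frequent words receive smaller indices (which is why 'i', appearing twice in the toy string, gets index 1):

```python
from collections import Counter

def build_vocab(texts):
    # Lower-case and split on whitespace, as Keras' Tokenizer does by default
    words = [w.lower() for line in texts for w in line.split()]
    # More frequent words get smaller indices; ties keep first-seen order
    counts = Counter(words)
    ordered = sorted(counts, key=lambda w: (-counts[w], words.index(w)))
    # Index 0 is reserved (e.g. for padding), so indexing starts at 1
    return {w: i + 1 for i, w in enumerate(ordered)}

vocab = build_vocab(["Hi I am enjoying this learning and I look forward for more"])
print(vocab)  # {'i': 1, 'hi': 2, 'am': 3, 'enjoying': 4, ...}
```

Running this on the toy string reproduces the dictionary shown earlier.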
# Instantiating the tokenizer class
tokenizer = Tokenizer()
The function which does tokenization is the Tokenizer() class which could be imported from tensorflow.keras as shown above. The first step is to instantiate the Tokenizer() class. Next we will see how to fit text to the tokenizer object we created.
# Fit the tokenizer on the text
tokenizer.fit_on_texts(string)
Fitting the text is done using the fit_on_texts() method. This method splits the strings and then creates the vocabulary we saw earlier. Since these steps have to be repeated multiple times, let us package them as a function
# Function for creating tokenizers
def createTokenizer(lines):
    tokenizer = Tokenizer()
    tokenizer.fit_on_texts(lines)
    return tokenizer
Let us use the above function to create the tokenizer for English words and look at the total length of words in English
# Create English Tokenizer
eng_tokenizer = createTokenizer(cleanMtDocs[:,0])
eng_vocab_size = len(eng_tokenizer.word_index) + 1
print(eng_vocab_size)
We can see that the length of the English vocabulary is 6255. This is after we incremented the actual vocabulary size by 1 to account for any word which is not part of the vocabulary. Let us list down the first 10 words of the English vocabulary.
# Listing the first 10 items of the English tokenizer
list(eng_tokenizer.word_index.items())[0:10]
From the output we can see how the words are assigned an index value. Similarly we will create the German vocabulary also.
# Create German tokenizer
ger_tokenizer = createTokenizer(cleanMtDocs[:,1])
# Defining German Vocabulary
ger_vocab_size = len(ger_tokenizer.word_index) + 1
Now that we have tokenized the German and English sentences, the next task is to define a standard sequence length for these languages.
Define Sequence lengths for German and English sentences
From our earlier introduction to sequence models, we know that we need data in sequences. A prerequisite in building sequence models is for the sequences to be of standard length. However if we look at our corpus, the lengths of the English and German sentences vary. We need to adopt a strategy for standardizing this length. One common strategy would be to adopt the maximum length of all the sentences as the standard sequence length. Sentences which have length lesser than the maximum length will have their indexes filled with zeros. However one pitfall of this strategy is that processing will be expensive. Let us say the length of the longest sentence is 50 and most of the other sentences are of length ranging from 8 to 12. We have a situation wherein for just one sentence we unnecessarily increase the length of all other sentences by filling dummy values. When data sets become large, having all sentences standardized to the longest sentence will make the computation expensive.
To get over such issues we will adopt a strategy of finding a length under which the majority of the sentences fall. This can be done by taking a high quantile of the sentence-length distribution.
Let us implement this strategy. To start off we will have to count the lengths of all the sentences in the corpus
# Create an empty list to store all English sentence lengths
len_english = []
# Getting the length of all the English sentences
[len_english.append(len(line.split())) for line in cleanMtDocs[:,0]]
len_english[0:10]
In line 2 we first created an empty list 'len_english'. Next we iterated through all the sentences in the corpus, found the length of each sentence and then appended each sentence length to the list we created, in line 4.
Similarly we will create the list of all German sentence lengths.
# Create an empty list to store all German sentence lengths
len_German = []
# Getting the length of all the German sentences
[len_German.append(len(line.split())) for line in cleanMtDocs[:,1]]
len_German[0:10]
After getting a distribution of all the lengths of English sentences, let us find the quantile value at 97.5% under which majority of the sentences fall.
# Find the quantile length
engLength = np.quantile(len_english, .975)
engLength
From the quantile value we can see that a sequence length of 5.0 would be a good value to adopt as majority of the sentences would fall within this length. Similarly let us calculate for the German sentences also.
# Find the quantile length
gerLength = np.quantile(len_German, .975)
gerLength
We will be using the sequence lengths we have calculated in the next process where we encode the word tokens as sequences of integers.
Encode the sequences as integers
Earlier we tokenized all the unique words and created vocabulary dictionaries. In those dictionaries we have a mapping of the word and an integer value for the word. For example let us display the first 5 tokens of the english vocabulary
# First 5 tokens and their integers from the English tokenizer
list(eng_tokenizer.word_index.items())[0:5]
We can see that each token is associated with an integer value. In our sequence model we will be using the integer values instead of the tokens themselves. This process of converting the tokens to their corresponding integer values is called encoding. We have a method called 'texts_to_sequences' in the tokenizer to convert the tokens to integer sequences.
The standard length of the sequence which we calculated in the previous section will be the length of each of these integer encodings. However what happens if a sentence string has length more than the standard length? Well, in that case the sentence string will be curtailed to the standard length. In the case of a sentence having length less than the standard length, the additional positions will be filled with zeros. This process is called padding.
The above two processes will be implemented in a function for convenience. Let us look at the code implementation.
# Function for encoding and padding sequences
def encode_sequences(tokenizer, length, lines):
    # Sequences as integers
    X = tokenizer.texts_to_sequences(lines)
    # Padding the sentences with 0
    X = pad_sequences(X, maxlen=length, padding='post')
    return X
The above function takes three variables
tokenizer : Which is the language tokenizer we created earlier
length : The standard length
lines : Which is our data
In line 5 each line is converted to a sequence of integers using the 'texts_to_sequences' method and then padded using the pad_sequences method in line 7. The parameter value of padding = 'post' means that zeros are added after the corresponding length of the sentence till the standard length is reached.
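To make the padding behaviour concrete, here is a minimal pure-Python sketch of what pad_sequences with padding='post' does. One detail worth noting is that Keras' default truncating='pre' drops tokens from the front of over-long sequences:

```python
def pad_post(seq, maxlen):
    # Keras' default truncating='pre' keeps the LAST maxlen tokens
    seq = seq[-maxlen:]
    # padding='post' appends zeros until the sequence reaches maxlen
    return seq + [0] * (maxlen - len(seq))

print(pad_post([4, 9, 2], 5))           # [4, 9, 2, 0, 0]
print(pad_post([4, 9, 2, 7, 1, 3], 5))  # [9, 2, 7, 1, 3]
```

The short sequence is padded at the end; the long one is truncated from the front.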
Let us now use this function to prepare the integer sequence data for both English and German sentences. We will split the data set into train and test sets first and then encode the sequences. Please remember that German sequences are our X variable and English sentences are our Y variable as we are translating from German to English.
# Preparing the train and test splits
from sklearn.model_selection import train_test_split
# split data into train and test set
train, test = train_test_split(cleanMtDocs, test_size=0.1, random_state=123)
print(train.shape)
print(test.shape)
# Creating the X variable for both train and test sets
trainX = encode_sequences(ger_tokenizer, int(gerLength), train[:,1])
testX = encode_sequences(ger_tokenizer, int(gerLength), test[:,1])
print(trainX.shape)
print(testX.shape)
Let us display the first few rows of the training set
# Displaying first 5 rows of the training set
trainX[0:5]
From the visualization of the training set we can see the integer encoding of the sequences and also the padding of the sequences. Similarly let us repeat the process for English sentences.
# Creating the Y variable for both train and test
trainY = encode_sequences(eng_tokenizer, int(engLength), train[:,0])
testY = encode_sequences(eng_tokenizer, int(engLength), test[:,0])
print(trainY.shape)
print(testY.shape)
We have come to the end of the preprocessing steps. Let us now get to the heart of the process which is defining the model and then training the model with the preprocessed training data.
Neural Translation Model Building
In this section we will look into the building blocks of the model. We will define the model structure in a function as shown below. Let us dive into details of the model
def defineModel(src_vocab, tar_vocab, src_timesteps, tar_timesteps, n_units):
    model = Sequential()
    model.add(Embedding(src_vocab, n_units, input_length=src_timesteps, mask_zero=True))
    model.add(LSTM(n_units))
    model.add(RepeatVector(tar_timesteps))
    model.add(LSTM(n_units, return_sequences=True))
    model.add(TimeDistributed(Dense(tar_vocab, activation='softmax')))
    # Compiling the model
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
    # Summarising the model
    model.summary()
    return model
In the second article of this series we were introduced to the encoder-decoder architecture. We will be manifesting that architecture within this code block. From the above code, up to line 5 is the encoder part and the remaining is the decoder part.
Let us now walk through each layer in this architecture.
Line 2 : Sequential Class
As you know, neural networks work on the basis of various layers stacked one after the other. In Keras, representation of the model as a stack of layers is initialized using a class called Sequential(). The Sequential class is usable for most cases except those where one has to share multiple layers or have multiple inputs and outputs. For the latter case the functional API in Keras is used. Since the model we have defined is quite straightforward, using the Sequential class will suffice.
Line 3 : Embedding Layer
A basic requirement for a neural network model is for the input to be in numerical format. In our case our inputs are in text format. So we have to convert this text into numerical features. Word embedding is a very effective way of representing a sequence of text in the form of numbers, ensuring that the syntactic relationship between words in the sequence is also maintained.
Embedding layer in Keras can be explained in simple terms as a look up dictionary between the unique words in the vocabulary and the corresponding vector of that word. The vector for each word which is the representation of the semantic similarity is learned during the training process. The Embedding function within Keras requires the following parameters vocab_size, embedding_size and sequence_length
Vocab_size : The vocab size is required to initialize the matrix of unique words and its corresponding vectors. The unique indexes of each word is initialized based on the vocab size. Let us look at an example to illustrate this.
Suppose there are two sentences with the following words
‘Embedding gets the semantic relationship between words’
‘Semantic relationships manifests the context’
For demonstration purpose let us assume that the initial vector representation of these words are as shown in the table below.
Let us understand each of the parameters of the embedding layer based on the above table. In our model the vocab size for the encoder part is the German vocabulary size. This is represented as src_vocab, which stands for source vocabulary. For the toy example we considered, our vocab size is 9 as there are 9 unique words in the above table.
embedding size : The second parameter which needs to be supplied is the embedding size. This represents the size of the vector for each word in the matrix. In the example matrix shown above the vector size is 3. The size of the embedding vector is a parameter which can be altered to get the right semantic relationship between the sequences of words in the sentence
sequence length : The sequence length represents the number of words which are required in each input sentence. As seen earlier during preprocessing, a prerequisite for the LSTM layer is for the length of sequences to be standardized. If a particular sequence has fewer words than the sequence length, it is padded with dummy vectors so that the length becomes standard. For illustration purposes let us assume that the sequence length = 10. The representation of these two sentence sequences in vector form will be as follows
[Embedding, gets, the ,semantic, relationship, between, words] => [[0.02 , 0.01 , 0.12], [0.21 , 0.41 , 0.52], [0.22 , 0.61 , 0.02], [0.71 , 0.01 , 0.32], [0.85 ,-0.23 , -0.52], [0.21 , -0.45 , 0.62], [-0.29 , 0.91 , 0.052], [0.00 , 0.00, 0.00], [0.00 , 0.00, 0.00], [0.00 , 0.00, 0.00]]
[Semantic, relationships, manifests ,the, context] => [[0.71 , 0.01 , 0.32], [0.85 ,-0.23 , -0.52], [0.121 , 0.401 , 0.352] ,[0.22 , 0.61 , 0.02], [0.721 , 0.531 , -0.592], [0.00 , 0.00, 0.00], [0.00 , 0.00, 0.00], [0.00 , 0.00, 0.00], [0.00 , 0.00, 0.00], [0.00 , 0.00, 0.00]]
The last parameter, mask_zero = True, is to inform the model that some part of the data is padding data.
The final output from the embedding layer after providing all the above inputs will be a three dimensional matrix of the following shape (No. of samples ,sequence length , embedding size). Let us view this pictorially
As seen from the above figure, let each rectangular block represent the vector representation of a word in the sequence. The depth of the block will be the embedding size dimensions. Multiple words along the ‘X’ axis will form a sequence and multiple such sequences along the ‘Y’ axis will represent the number of examples we have in the corpora.
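The lookup performed by the embedding layer can be sketched with a toy example (the sizes and random vectors here are made up for illustration): an embedding layer is essentially an index into a (vocab_size, embedding_size) matrix, producing the three dimensional output described above.

```python
import numpy as np

vocab_size, embedding_size = 10, 3
rng = np.random.default_rng(0)
# The lookup table learned during training; row 0 is the padding vector
E = rng.normal(size=(vocab_size, embedding_size))

# A batch of two padded integer sequences of length 4
batch = np.array([[3, 5, 0, 0],
                  [1, 2, 7, 0]])

# An embedding layer is just a per-token row lookup into E
embedded = E[batch]
print(embedded.shape)  # (2, 4, 3): (samples, sequence length, embedding size)
```

During training, Keras updates the rows of this table by backpropagation; the lookup itself stays this simple.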
Line 4 : Sequence to sequence Layer (LSTM)
The next layer in the model is the sequence to sequence layer which in our case is a LSTM. We discussed in detail the dynamics of the LSTM layer in the third and fourth articles of the series. The number of hidden units is defined as a parameter when defining the LSTM unit.
Line 5 : Repeat Vector
In our machine translation application, we need to produce output which is equal in length with the standard sequence length of the target language ( English) . However our input at the encoder phase is equal in length to the source sequence ( German ). We therefore need a mechanism to map the output from the encoder phase to the number of sequences of the decoder phase. A ‘Repeat Vector’ is that operation which maps the input sequences (German sequence) to that of the output sequences ( English sequence). The below figure gives a pictorial representation of the operation.
As seen in the figure above we have to match the output from the encoder and the decoder. The sequence length of the encoder will be equal to the source sequence length ( German) and the length of the decoder will have to be the length of the target sequence ( English). Repeat vector can be described as a trick to match them. The output vector of the encoder where the information of the complete sequence is encoded is repeated in this operation. It is important to note that there are no weights and parameters in this operation.
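The repeat vector operation can be sketched in NumPy with toy numbers (these values are illustrative only):

```python
import numpy as np

# Encoder output: one 4-dimensional state vector per example (batch of 2)
encoder_out = np.array([[0.1, 0.2, 0.3, 0.4],
                        [0.5, 0.6, 0.7, 0.8]])
tar_timesteps = 3

# RepeatVector tiles the vector along a new time axis: (2, 4) -> (2, 3, 4)
repeated = np.repeat(encoder_out[:, np.newaxis, :], tar_timesteps, axis=1)
print(repeated.shape)  # (2, 3, 4)
```

Every timestep of the decoder input is an identical copy of the encoder state, which is why this operation has no trainable weights.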
Line 6 : LSTM Layer ( with return sequence is true)
The next layer is another LSTM unit. The dynamics within this unit are the same as in the previous LSTM unit. The only difference is in the output. In the previous LSTM unit we never had any output from each of the sequences. The output sequences are controlled by the parameter return_sequences. By default it is 'False'. However in this case we have specified return_sequences = True. This means that we need to have an output from each of the sequences. When we keep return_sequences = False only the last sequence will have an output.
Line 7 : Time Distributed – Dense Layer with Softmax activation
This is the final layer of the network. This layer receives the output from the previous LSTM layer, which has outputs equal to the target sequence length. Each of these sequences is then connected to a dense layer or a fully connected layer. A dense layer in Keras is synonymous with the dot product of the output and weight matrix along with the addition of the bias term.
Dense = dot(Wy , Y) + by
Wy = Weight matrix of the Dense layer
Y = Output from each of the LSTM sequence
by = bias term for each sequence
After the dense operation, the resultant vector is taken through a softmax layer which converts the output to a probability distribution over the vocabulary of the target language. Another term to note is Time distributed. This implies that each sequence output which we get out of the LSTM layer is applied to a separate dense operation and a subsequent softmax layer. So at the end of all the operations we will get a probability distribution over the target vocabulary from each of the outputs.
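The time distributed dense and softmax operations can be sketched in NumPy as follows (toy dimensions and random weights for illustration):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a vector
    e = np.exp(z - z.max())
    return e / e.sum()

n_units, tar_vocab, tar_timesteps = 4, 6, 3
rng = np.random.default_rng(1)
Wy = rng.normal(size=(n_units, tar_vocab))  # dense weights, shared across timesteps
by = np.zeros(tar_vocab)                    # bias, shared across timesteps

# One LSTM output vector per target timestep
Y = rng.normal(size=(tar_timesteps, n_units))

# 'Time distributed' = the SAME dense + softmax applied at every timestep
probs = np.array([softmax(y @ Wy + by) for y in Y])
print(probs.shape)        # (3, 6): one distribution over the vocabulary per timestep
print(probs.sum(axis=1))  # each row sums to 1
```

Note that the same Wy and by are reused at every timestep; that weight sharing is what TimeDistributed expresses in Keras.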
Line 9 : Optimizer
In this layer the optimizer function and the loss function are defined. The loss function we have defined is sparse_categorical_crossentropy, which is beneficial from a training perspective. If we used categorical_crossentropy we would require one-hot encoding of the output matrix, which can be very expensive to train given the huge size of the target vocabulary. sparse_categorical_crossentropy gives us a great alternative.
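The difference between the two losses can be sketched numerically (illustrative code, not the Keras internals): both compute -log of the probability assigned to the true word; the sparse variant takes the integer index directly instead of a one-hot vector.

```python
import math

def categorical_crossentropy(one_hot, probs):
    # needs the true word as a full one-hot vector over the vocabulary
    return -sum(t * math.log(p) for t, p in zip(one_hot, probs))

def sparse_categorical_crossentropy(label_index, probs):
    # needs only the integer index of the true word
    return -math.log(probs[label_index])

probs = [0.6, 0.2, 0.2]   # predicted distribution over a toy 3-word vocabulary
one_hot = [0, 1, 0]       # true word is index 1
dense_loss = categorical_crossentropy(one_hot, probs)
sparse_loss = sparse_categorical_crossentropy(1, probs)  # same value
```

For a vocabulary of thousands of words, storing one integer label per time step instead of a full one-hot vector is what makes the sparse variant so much cheaper.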
Line 11 : Summary
The last line prints the summary of the model. Let us try to unravel each layer of the summary based on our understanding of the LSTM.
The summary displays the model layer by layer, the way we built it. The first layer is the embedding layer, where the output shape is (None, 6, 256). None stands for the number of examples we have. The other two dimensions are the length of the source sequence (src_timesteps = gerLength) and the embedding size (256).
Next we applied an LSTM layer with 256 hidden units, which is represented as (None, 256). Please note that we get only one output from this LSTM layer, as we have not specified return_sequences = True.
After the single LSTM layer we have the repeat vector operation, which copies the single output of the LSTM to a length equal to the target language length (engLength = 5).
We have another LSTM layer after the repeat vector operation. However, in this LSTM layer we have defined return_sequences = True. Therefore we get an output of 256 units for each step of the sequence, resulting in the output dimension (None, 5, 256).
Finally we have the time distributed dense layer. We saw earlier that the time distributed dense layer performs a dense operation on each time step. Each step will be of the form Dense = dot(Wy , Y) + by. The weight matrix Wy will have a dimension of (256, 6225), where 6225 is the size of the target vocabulary (eng_vocab_size = 6225).
Y is the output from the previous LSTM layer at each time step, which has a dimension of (1, 256). So the dot product of these two matrices will be
[1, 256] x [256, 6225] => [1, 6225]
The above is for one time step. When there are 5 time steps for the target language we will get a dimension of (None, 5, 6225).
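This shape bookkeeping can be checked mechanically. Below is a small, illustrative helper (not from the original post) that validates the per-time-step matrix multiplication:

```python
def matmul_shape(a, b):
    """Return the shape of dot(A, B) given shapes a and b, or fail loudly."""
    rows_a, cols_a = a
    rows_b, cols_b = b
    assert cols_a == rows_b, "inner dimensions must match"
    return (rows_a, cols_b)

# Per time step: (1, hidden) x (hidden, vocab) -> (1, vocab)
step_shape = matmul_shape((1, 256), (256, 6225))
```

With 5 decoder time steps, this per-step (1, 6225) output stacks into the (None, 5, 6225) shape reported by the summary.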
Model fitting
Having defined the model and the optimization function, it's time to fit the model on the data.
# Fitting the model
checkpoint = ModelCheckpoint('model1.h5', monitor='val_loss', verbose=1, save_best_only=True, mode='min')
model.fit(trainX, trainY, epochs=50, batch_size=64, validation_data=(testX, testY), callbacks=[checkpoint], verbose=2)
The initiation of both the forward and backward propagation is through the model.fit function. In this function we provide the inputs (trainX and trainY), the number of epochs, the batch size for each pass of the optimizing function, and also the validation set. We also define checkpointing to save our models based on the validation score. The model fitting or training process is a time-consuming step. During the training phase the forward pass, error calculation and backpropagation processes kick in.
With this we come to the end of the training process. Let us look back and summarize the model architecture to get a big picture of the process.
Model Big picture
Having seen the model components, let us now get a big picture as to the whole process and how the forward and back propagation work together to learn the required parameters from the data.
The start of the process is the creation of the features for the model namely the embedding layer. The inputs for the input layer are the source vocabulary size, embedding size and the length of the sequences. The output we get out of this is a three dimensional matrix with number of examples, sequence length and the embedding size as the three dimensions.
The embedding layer is then supplied to the first LSTM layer as input with each time step receiving an embedding layer . There will not be any output for each time step of the sequence. The only output will be from the last time step which is then given as input to the next LSTM layer. The number of time steps of the second LSTM unit will be the equal to length of the target language sequence. To ensure that the LSTM has inputs equal to the target sequences, the repeat vector function is used to copy the output from the previous LSTM layer to all the time steps of the second LSTM layer.
The second LSTM layer will given intermediate outputs for each of the time steps. Each of these outputs are then fed into a dense layer. The output of the dense layer will be a vector equal to the vocabulary length of the target language. This vector is then passed on to the softmax layer to convert it into a probability distribution around the target vocabulary. The output from the softmax layer, which is the prediction is compared with the actual label and the difference would be the error.
Once the error is generated, it has to be back propagated to all parts of the network to get the gradients of each of the parameters. The error starts propagating first from the dense layer and then propagates to each sequence step of the second LSTM unit. Within the LSTM unit the error starts propagating from the last step and then progressively moves towards the first. During the movement of the error from the last step to the first, the respective errors from each of the steps are added to the propagated error so as to get the gradients. The final weight gradient is the sum of the gradients obtained from each step of the LSTM, as seen in the numerical example on back propagation. The gradient with respect to each of the inputs is also calculated by summing across all the time steps. The sum total of the input gradients from the second LSTM layer is propagated back to the first LSTM layer.
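The summing of per-step gradients can be verified on a toy recurrence (a deliberately simplified stand-in for the LSTM — a single scalar weight, no gates): with h_t = w·h_{t-1} + x_t and loss L = h_T, the gradient of the shared weight w is one term per time step, summed.

```python
def forward(w, xs, h0=1.0):
    # toy recurrence: h_t = w * h_{t-1} + x_t, loss L = h_T
    h, hist = h0, [h0]
    for x in xs:
        h = w * h + x
        hist.append(h)
    return h, hist

def grad_w(w, xs, h0=1.0):
    # Backpropagation through time: one gradient contribution per time step
    # (dh_T/dh_t = w**(T-t), local dh_t/dw = h_{t-1}), summed to give the
    # total gradient of the shared weight w
    _, hist = forward(w, xs, h0)
    T = len(xs)
    return sum((w ** (T - t)) * hist[t - 1] for t in range(1, T + 1))

w, xs, eps = 0.9, [0.5, -0.2, 0.3], 1e-6
numeric = (forward(w + eps, xs)[0] - forward(w - eps, xs)[0]) / (2 * eps)
analytic = grad_w(w, xs)  # matches the numerical derivative
```

The analytic sum matches a numerical central-difference derivative — the "final weight gradient is the sum over sequence steps" statement above, in miniature.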
In the first LSTM layer, the gradient received from the layer above is propagated from the last time step. The error propagates progressively through each time step. In this LSTM there is no per-step error to be added, as there was no output from the individual steps except the last. Along with all the weight gradients, the gradient vector for the embedding layer is also calculated. All these operations are carried out over all the epochs, and finally the model weights are learned, which help in the final prediction.
Once the training is over, we get the most optimised parameters inside the model object. This model object is then used to predict on the test data set. Let us now look at the prediction or inference phase of the process.
Inference Process
The proof of the pudding for the model we created is the predictions we get on a test set. Let us first look at the predictions from the model we just created.
# Generating the predictions
prediction = model.predict(testX, verbose=0)
prediction.shape
We get the predictions from the model using the model.predict() method with the test data as its input. The prediction we get will be of shape (num_examples, target_sequence_length, target_vocabulary_size). Each example is a sequence of probability distributions over the target vocabulary. For each sequence step the predicted word is the word at the index of the vocabulary where the probability is greatest. Let us demonstrate this with a figure.
Let us assume that the vocabulary has only three words [I, Learning, Am] with indexes [1, 2, 3] respectively. On predicting with the model we will get a probability distribution for each sequence step, as shown in the figure above. For the first step the probability for the first index word is 0.6 and the other two are 0.2 and 0.2 respectively. So, from the probability distribution, the word at the first index has the largest probability and will be the predicted word for that step. Based on the index with the maximum probability at each step of the sequence, we get the predictions [1, 3, 2], which translates to [I, Am, Learning] as per the vocabulary.
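The figure's example can be reproduced in a few lines of plain Python (illustrative; index 0 is assumed reserved for padding):

```python
vocab = {1: 'I', 2: 'Learning', 3: 'Am'}

# One probability distribution per sequence step, as in the figure
prediction = [
    [0.0, 0.6, 0.2, 0.2],   # step 1 -> index 1 ('I')
    [0.0, 0.1, 0.2, 0.7],   # step 2 -> index 3 ('Am')
    [0.0, 0.2, 0.5, 0.3],   # step 3 -> index 2 ('Learning')
]

# argmax per step: the index with the largest probability
indexes = [max(range(len(p)), key=p.__getitem__) for p in prediction]
words = [vocab[i] for i in indexes]
```

The indexes come out as [1, 3, 2], which map through the vocabulary to [I, Am, Learning].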
To get the index of each of the sequences, we use a function called argmax(). This is how the code to get the indexes of the predictions will look:
# Getting the prediction index along the last axis ( Vocabulary size axis)
predIndex = [argmax(vector, axis=-1) for vector in prediction]
predIndex[0:3]
In the above code, axis = -1 means that the argmax is taken over the last dimension of the prediction, which is the vocabulary dimension. The predictions we get will be sequences of integers with the same length as the target sequence.
If we look at the first 3 predictions we can see that the predictions are integers which have to be converted to the corresponding words. This can be done using the tokenizer dictionary we created earlier. Let us look at how this is done
# Creating the reverse dictionary
reverse_eng = eng_tokenizer.index_word
The index_word attribute of the tokenizer class gives the word for an input index. In the above step we have created a dictionary called reverse_eng which outputs a word when given an index. For a sequence of predictions we have to loop through all the indexes of the predictions and then generate the predicted words, as shown below.
# Converting the tokens to a sentence
preds = []
for pred in predIndex[0]:
    if pred == 0:
        continue
    preds.append(reverse_eng[pred])
print(' '.join(preds))
In the above code block we first initialized an empty list preds (line 2). We then iterated through each of the indexes (lines 3-6) and generated the corresponding word for each index using the reverse_eng dictionary, skipping the padding index 0. The generated words are appended to the preds list. Finally, we joined all the words in the list together to get our predicted sentence.
Let us now package all the inference code we have seen so far into two functions.
# Creating a function for converting sequences
def Convertsequence(tokenizer, source):
    target = list()
    reverse_eng = tokenizer.index_word
    for i in source:
        if i == 0:
            continue
        target.append(reverse_eng[int(i)])
    return ' '.join(target)
The first function is to convert the sequence of predictions to a sentence.
# Creating a function for generating predictions
# (Reconstructed sketch -- the original code block was truncated here; this
# version is inferred from how the function is called below.)
def generatePredictions(model, tokenizer, data):
    prediction = model.predict(data, verbose=0)
    AllPreds = []
    for pred in prediction:
        predIndex = argmax(pred, axis=-1)
        AllPreds.append(Convertsequence(tokenizer, predIndex))
    return AllPreds
The second function generates predictions from the test set and then builds the predicted sentences. The first function we defined is used inside the generatePredictions function.
Now that we have understood how the predictions can be generated let us go ahead and generate predictions for the first 20 examples of the test set and evaluate the results.
# Generate predictions
predSent = generatePredictions(model, eng_tokenizer, testX[0:20, :])

for i in range(len(testY[0:20])):
    targetY = Convertsequence(eng_tokenizer, testY[i:i+1][0])
    print("Original sentence : {} :: Prediction : {}".format([targetY], [predSent[i]]))
From the output we can see that the predictions are pretty close in a lot of the examples. We can also see that there are some instances where the context is understood and predicted with different words like the examples below
There are also predictions which are way off the target
However, considering that the model we used was simple and the data set relatively small, the model does a reasonably okay job.
Inference on your own sentences
Till now we have predicted on the test set. Let us see how we can generate predictions from an input sentence we provide.
To generate predictions from our own input sentences, we have to first clean them and then tokenize them to transform them into the format the model understands. Let us look at the functions which do these tasks.
# Function to clean the input text
# (Reconstructed sketch -- the original code block was truncated here; it
# lower-cases the words and strips punctuation, per the description below.)
import string

def cleanInput(lines):
    cleanSent = []
    cleanDocs = list()
    for docs in lines.split():
        line = docs.lower().translate(str.maketrans('', '', string.punctuation))
        cleanDocs.append(line)
    cleanSent.append(' '.join(cleanDocs))
    return cleanSent
The first function is the cleaning function. This is an abridged version of the cleaning function we used for our original data set. The second function we will use is the encode_sequences function we used earlier. Using these functions, let us go ahead and generate our predictions.
# Trying different input sentences
inputSentence = 'Es ist ein großartiger Tag' # It is a great day ?
The first sentence we will try is the German equivalent of 'It is a great day ?'. Let us clean the input text first using the function we developed.
# Clean the input sentence
cleanText = cleanInput(inputSentence)
Next we will encode this sentence into sequence of integers
# Encode the input sentence as a sequence of integers
seq1 = encode_sequences(ger_tokenizer, int(gerLength), cleanText)
Let us get our predictions and print them out
# Generate the prediction
predSent = generatePredictions(model, eng_tokenizer, seq1)
print("Original sentence : {} :: Prediction : {}".format([cleanText[0]], predSent))
It's not a great prediction, is it? Let us try a couple more sentences.
inputSentence1 = 'Heute wird es regnen' # it's going to rain Today
inputSentence2 = 'Ich habe im Radio gesprochen' # I spoke on the radio

for sentence in [inputSentence1, inputSentence2]:
    cleanText = cleanInput(sentence)
    seq1 = encode_sequences(ger_tokenizer, int(gerLength), cleanText)
    # Generate the prediction
    predSent = generatePredictions(model, eng_tokenizer, seq1)
    print("Original sentence : {} :: Prediction : {}".format([cleanText[0]], predSent))
We can see that the predictions on our own sentences are not promising.
Why is it that the test set gave us reasonable predictions while our own sentences do not? One obvious reason is that the distribution of the words we used could be different from the distribution used for training. Besides, the model we used was a simple one and the data set was relatively small. All these could be reasons for the bad predictions on our own sentences. So how do we improve the quality of the predictions? There are different ways to do that. Let us see some of them.
- Use a bigger data set for training and train for more epochs.
- Change the model architecture. Experiment with different numbers of units and layers. Try variations like bidirectional LSTMs.
- Try out different regularization methods like dropout.
- Use attention mechanisms.
There are different avenues for improvement. I would urge you to try out different choices and let me know how you fared.
Next Steps
Congratulations, we have successfully built a prototype of a machine translation system. The next step in our journey is to convert this prototype into an application. We will address that in the next post.
Go to article 6 of this series : From prototype to production !!!!
Obstacle Avoidance Robot Car - Arduino
- by Frank
The robot navigates around avoiding obstacles by ultrasound and choosing the best way to follow.
The programming code is attached here as a .txt file. Download it and paste the text on the Arduino compiler.
Remember to add the IRremote library (also attached to this post) to your arduino software before trying to compile the code!
Further information about motor controller and IR receiver here:
I bought the robot from:
doubt
Please send me the schematic capture or wiring diagram of the project.
Thank you for sharing. I have got the code, but I have neither the schematic capture nor the wiring diagram of the project. Please send it to me.
My gmail is : timlaibautroi090792@gmail.com
Thank you so much !
Help with wiring required with this code
I have bought kit, sensor shield is v4.0 , can you please help me in wiring this ? i am really confused and i am very new to this....
i have tried searching on google so far i am not able to figure it out completely.
please i need help
You may check my wiring in
You may check my wiring in the pictures i posted earlier in this blog. Look for them. Good luck!
Wiring motor signals to sensor shield V5
Hello,
i have the same robot. But I cannot start the motors to drive the wheels. I wired the robot as in the description here.
May I ask question about wiring.
1) did you connect 5V and GND contacts of motor shield to Sensor shield V5?
2) Did you connect ENA and ENB port of motor shield to sensor shield V5? In your code these ports are not declared. You dont use them? How you enable motors to start?
Wiring confusion
Can you please help me in the wiring? and where should i put the ENA & ENB pins?
i put the wires just like
i put the wires just like you said, downloaded the IR library and installed it, and uploaded the script. Now all that happens when I press the '1' button is the robot just keeps moving forward. I tried to move the head thinking that the servo was stuck, but the head (the ultrasonic sensor) would just vibrate in its place. You got any idea about what's happening?
Microservo must be connected
Microservo must be connected to pin number 5.. just in case
Check the wiring of the
Check the wiring of the microservo and ensure that you are using the correct pin according to the programming code.
thank you soooo much for the wiring.
Sorry if im asking too much, but if you're an expert programmer, please help me with this:
i downloaded a script for arduino: (this is part of it)
#include "Ultrasonic.h"
#include <Servo.h>
#include <Motor.h>
Ultrasonic ultrasonic(12,13); // 12->trig, 13->echo
Servo myservo; // create servo object to control a servo
Motor motor;
it's showing me this and not letting me compile: 'Ultrasonic' does not name a type
What does this mean? And what should i do to fix it? | http://letsmakerobots.com/node/40502 | CC-MAIN-2015-32 | refinedweb | 528 | 83.56 |
Project Help and Ideas » Some LEDs are brighter than others on my LED array
First off, I've really been enjoying getting to know my NerdKit. It's really an outstanding product. I've learned quite a bit already thanks to this little guy.
I'm having a little snag building my LED array. I wired up a simple 12x5 LED array on a breadboard (I skipped the 24x5 size for later). For some reason the LEDs hooked up to PC0-PC5 columns are about half as bright as the ones wired up to PD2-PD7 columns at the same time. For my test program I am just turning all the columns high and the rows low, so that all my LEDs are on. My next step was going to be writing the interrupt code.
If I move the PD7 column wire over to the PD0 column, the LED is brighter. Furthermore, if i bypass the PB1-PB5 pins for the rows and hookup the rows directly to ground, they are bright on all the columns. So, the wiring all seems good.
As soon as I start adding and turning on PD columns, my PC columns start getting dimmer. It's like the PD pins have a stronger path to ground or something. My PC0-PC5 will be nice and bright, but each PD column I add makes them dimmer.
Any help or ideas would be appreciated. Maybe I damaged my AVR somehow or I got the wrong LEDs from digikey? I'm stumped! Thanks.
#define F_CPU 14745600
#include <avr/io.h>
#include <inttypes.h>
#include "../libnerdkits/delay.h"
#include "../libnerdkits/lcd.h"
int main() {
// rows
DDRB |= 1<< PB5;
DDRB |= 1<< PB4;
DDRB |= 1<< PB3;
DDRB |= 1<< PB2;
DDRB |= 1<< PB1;
// columns
DDRC |= 1<<PC5;
DDRC |= 1<<PC4;
DDRC |= 1<<PC3;
DDRC |= 1<<PC2;
DDRC |= 1<<PC1;
DDRC |= 1<<PC0;
DDRD |= 1<<PD2;
DDRD |= 1<<PD3;
DDRD |= 1<<PD4;
DDRD |= 1<<PD5;
DDRD |= 1<<PD6;
DDRD |= 1<<PD7;
while(1) {
PORTB &= ~(1 << PB5);
PORTB &= ~(1 << PB4);
PORTB &= ~(1 << PB3);
PORTB &= ~(1 << PB2);
PORTB &= ~(1 << PB1);
PORTC |= (1 << PC5);
PORTC |= (1 << PC4);
PORTC |= (1 << PC3);
PORTC |= (1 << PC2);
PORTC |= (1 << PC1);
PORTC |= (1 << PC0);
// PC0-PC5 are bright
delay_ms(3000);
PORTD |= (1 << PD2);
PORTD |= (1 << PD3);
PORTD |= (1 << PD4);
PORTD |= (1 << PD5);
PORTD |= (1 << PD6);
PORTD |= (1 << PD7);
// PC0-PC5 are dim
delay_ms(3000);
PORTD &= ~(1 << PD2);
PORTD &= ~(1 << PD3);
PORTD &= ~(1 << PD4);
PORTD &= ~(1 << PD5);
PORTD &= ~(1 << PD6);
PORTD &= ~(1 << PD7);
// PC0-PC5 are bright again
}
return 0;
}
What value of current-limiting resistors are you using?
Are you on battery power or an A/C adapter?
Are you exceeding any of the current limitations described in the Electrical Characteristics chapter of the datasheet? (20mA per pin, sinking 100mA total for PB0 through PB5 plus PD5 through PD7, sourcing 150mA total for PD0 through PD4, sourcing 150mA total for PB0 through PB5 plus PD5 through PD7, etc.)
I'm not using any resistors, the anodes and cathodes are wired up directly to the AVR, which is how I thought the DIY Marquee is wired. Did I miss a step in my enthusiasm?
I tried USB power and a regulated 5 volt power supply from my arcade game parts collection and got the same result. I'll double-check the regulated power supply results again, but i'm pretty sure they were the same as USB powered. I figured it's probably not good to power so many LEDs through the USB, but I didn't think it would hurt anything temporarily while building this thing.
The LEDs I ordered from digikey are part # 160-1701-ND (LED 5MM HI-EFF RED TRANS).
I'm not sure about exceeding the current limitations. Since I'm not using any resistors that sounds like the right track, but I didn't see any mention of resistors in the DIY marquee. Most of the testing I did was with USB power.
I never built the LED array project so I don't know how it's wired. If a single pin is sinking an entire row of 12 LEDs, you're limited to 20mA/12 = 1.7mA which is probably a very dim LED. If you're seeing six bright LEDs with a single pin sinking them you're probably exceeding the 20mA limit. If they get dimmer as you add more columns I'd say you're definitely exceeding the limits.
Yes, you can damage the MCU doing this. Stick to battery power to make that less likely while you're experimenting.
Hi Sinabyte,
In the LED Array Kit we get away with a lot of things because of the way we wired up the LEDs, and the fact that we are pulsing them on and off (they are running at a fairly low duty cycle). This means that we don't really need to worry about current-limiting resistors, since we won't leave them on long enough to start drawing too much current. In your setup, you are trying to run more than one LED at full blast, which runs into the limit of how much current the MCU can source or sink.
Humberto
Thanks, that makes sense. I will try putting some new code in an interrupt and pulse the LEDs instead of torturing them with non-stop current in the main loop. Some basic math and experimenting should get me firing them at an appropriate cycle. Hopefully i didn't damage anything from flailing about and shorting wires.
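That back-of-the-envelope math might look like this (hypothetical numbers — the 20 mA figure is the per-pin limit quoted earlier in the thread):

```python
PIN_LIMIT_MA = 20.0    # per-pin limit quoted in the thread (datasheet value)
LEDS_PER_ROW = 12
NUM_ROWS = 5

# One pin sinking a whole lit row continuously: the pin limit is split
# across every LED in the row
steady_per_led = PIN_LIMIT_MA / LEDS_PER_ROW      # ~1.7 mA each -> dim

# Row scanning: an LED driven at peak_ma while its row is active conducts
# only 1/NUM_ROWS of the time, so the average current stays modest
peak_ma = 20.0
avg_per_led = peak_ma / NUM_ROWS                  # 4.0 mA average
```

This is why pulsing lets each LED run at a higher peak current without violating the average limits.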
I was using the LED Array Kit as a guide, including the code, since I guess I like to do things the hard way somewhat from scratch. Thanks again to both of you.
Drag tabs between QTabWidgets
- Joel Bodenmann
I have two QTabWidgets in a QSplitter. I'd like to be able to drag a tab from one QTabWidget to the other QTabWidget. Can anybody tell me how I can achieve this?
I saw this post:
However, my problem is that the dragMoveEvent() from my QTabWidget subclass never gets called. I moved one step further and subclassed QTabBar, but I have the same problem there: dragMoveEvent() is never called.
Any ideas? As far as I can tell there's no widget property that must be enabled to enable drags; that seems to only exist for drops.
If there's a better way to achieve what I want to achieve, I'm all ears. After all, eventually I'd like to have a visible "animation" when the tab is being dragged around.
- Chris Kawa Moderators
The post you linked to is in error. The drag move event takes place on the target widget, not when you want to start a drag from the source. So in that example code it should be mouseMoveEvent, not dragMoveEvent.
@Joel-Bodenmann said:
As far as I can tell there's no widget property that must be enabled to enable drags
There's no property but you must implement dragging yourself. See the docs for an example how to do it. That's for the source widget of the drag.
The target has to have dropping enabled. When something is dragged over it it will receive drag enter event. If that event is accepted the drag move events will follow when you move above the target and finally a drop event when you release the mouse button.
- Joel Bodenmann
@Chris-Kawa
Thank you very much for clearing things up! I thought that there was something fishy...
So if I am not mistaken I have to subclass QTabBar for this, and not QTabWidget. Is that correct?
Can you tell me what I'd have to do to "animate" the moving tab? I'd like to have the tab (just the tab, the thing with the string in it) under the mouse cursor while it is being dragged around.
- Chris Kawa Moderators
So if I am not mistaken I have to subclass QTabBar for this, and not QTabWidget. Is that correct?
Yes. You can still use QTabWidget for convenience and just set your customized tab bar on it.
Can you tell me what I'd have to do to "animate" the moving tab?
It's a bit complicated and I haven't actually tried it, so take this with a grain of salt, but I'd try something like this:
First - you need to "remove" the original tab from the source. For this you can simply hide the tab for the
exec()part of the drag and either show it back or permanently delete, depending on whether the drag was aborted or successful (you can get that from the exec return value).
Next part is showing the tab under cursor when it is being dragged. For that you need to set a pixmap on the drag object that will represent the tab. You can probably get it easily with render() of the tab widget. Just calculate the right region based on the geometry of the tab you're dragging.
Last part (and the hardest I think) is placing a tab in the target widget and moving it around. For that I'd create a "dummy" tab in the target in the drop enter event. If you get the drop leave event remove the dummy. If you get a drop event the dummy becomes the new tab. The tricky part is moving the tab around while the drag operation is still going on. I'd try to send a fake mouse press event to the tab widget in the drag enter, then fake mouse move events in the drop move event and finally a fake mouse release in the drop and leave drag events. This way the dummy tab should move along with the cursor when you move it.
Again, I haven't tried it, but I think it should be doable this way.
- Joel Bodenmann
Thank you very much, this helped a lot!
I got it working as per your recommendation. I will publish the resulting widget in the following days for people that stumble across the same issue in the future.
Had a similar Issue. Was able to find a solution. Below is a generic PyQt5 example that solves the problem using right click.
import sys
from PyQt5.QtGui import *
from PyQt5.QtWidgets import *
from PyQt5.QtCore import *

class Tabs(QTabWidget):
    def __init__(self, parent):
        super().__init__(parent)
        self.parent = parent
        self.setAcceptDrops(True)
        self.tabBar = self.tabBar()
        self.tabBar.setMouseTracking(True)
        self.indexTab = None
        self.setMovable(True)
        self.addTab(QWidget(self), 'Tab One')
        self.addTab(QWidget(self), 'Tab Two')

    def mouseMoveEvent(self, e):
        if e.buttons() != Qt.RightButton:
            return
        globalPos = self.mapToGlobal(e.pos())
        tabBar = self.tabBar
        posInTab = tabBar.mapFromGlobal(globalPos)
        self.indexTab = tabBar.tabAt(e.pos())
        tabRect = tabBar.tabRect(self.indexTab)

        pixmap = QPixmap(tabRect.size())
        tabBar.render(pixmap, QPoint(), QRegion(tabRect))
        mimeData = QMimeData()
        drag = QDrag(tabBar)
        drag.setMimeData(mimeData)
        drag.setPixmap(pixmap)
        cursor = QCursor(Qt.OpenHandCursor)
        drag.setHotSpot(e.pos() - posInTab)
        drag.setDragCursor(cursor.pixmap(), Qt.MoveAction)
        dropAction = drag.exec_(Qt.MoveAction)

    def dragEnterEvent(self, e):
        e.accept()
        if e.source().parentWidget() != self:
            return
        print(self.indexOf(self.widget(self.indexTab)))
        self.parent.TABINDEX = self.indexOf(self.widget(self.indexTab))

    def dragLeaveEvent(self, e):
        e.accept()

    def dropEvent(self, e):
        print(self.parent.TABINDEX)
        if e.source().parentWidget() == self:
            return
        e.setDropAction(Qt.MoveAction)
        e.accept()
        counter = self.count()
        if counter == 0:
            self.addTab(e.source().parentWidget().widget(self.parent.TABINDEX),
                        e.source().tabText(self.parent.TABINDEX))
        else:
            self.insertTab(counter + 1,
                           e.source().parentWidget().widget(self.parent.TABINDEX),
                           e.source().tabText(self.parent.TABINDEX))

class Window(QWidget):
    def __init__(self):
        super().__init__()
        self.TABINDEX = 0
        tabWidgetOne = Tabs(self)
        tabWidgetTwo = Tabs(self)
        layout = QHBoxLayout()
        self.moveWidget = None
        layout.addWidget(tabWidgetOne)
        layout.addWidget(tabWidgetTwo)
        self.setLayout(layout)

if __name__ == '__main__':
    app = QApplication(sys.argv)
    window = Window()
    window.show()
    sys.exit(app.exec_())
Note: If you wish to use authentication with Google Protocol RPC, you can use the authentication currently available for App Engine apps in the Google Cloud Console. You'll also need to specify the login requirement in your app.yaml file. Other auth methodologies are not currently supported by the Google Protocol RPC library within App Engine.
The Google Protocol RPC library is a framework for implementing HTTP-based remote procedure call (RPC) services. An RPC service is a collection of message types and remote methods that provide a structured way for external applications to interact with web applications. Because you can define messages and services in the Python programming language, it's easy to develop Protocol RPC services, test those services, and scale them on App Engine.
While you can use the Google Protocol RPC library for any kind of HTTP-based RPC service, some common use cases include:
- Publishing web APIs for use by third parties
- Creating structured Ajax backends
- Cloning to long-running server communication
You can define a Google Protocol RPC service in a single Python class that contains any number of declared remote methods. Each remote method accepts a specific set of parameters as a request and returns a specific response. These request and response parameters are user-defined classes known as messages.
The Hello World of Google Protocol RPC
This section presents an example of a very simple service definition that receives a message from a remote client. The message contains a user's name (HelloRequest.my_name) and sends back a greeting for that person (HelloResponse.hello):
from protorpc import messages
from protorpc import remote
from protorpc.wsgi import service

package = 'hello'

# Create the request string containing the user's name
class HelloRequest(messages.Message):
    my_name = messages.StringField(1, required=True)

# Create the response string
class HelloResponse(messages.Message):
    hello = messages.StringField(1, required=True)

# Create the RPC service to exchange messages
class HelloService(remote.Service):

    @remote.method(HelloRequest, HelloResponse)
    def hello(self, request):
        return HelloResponse(hello='Hello there, %s!' % request.my_name)

# Map the RPC service and path (/hello)
app = service.service_mappings([('/hello.*', HelloService)])
Getting Started with Google Protocol RPC
This section demonstrates how to get started with Google Protocol RPC using the guestbook application developed in the App Engine Getting Started Guide (Python). Users can visit the guestbook (also included as a demo in the Python SDK) online, write entries, and view entries from all users. Users interact with the interface directly, but there is no way for web applications to easily access that information.
That's where Protocol RPC comes in. In this tutorial, we'll apply Google Protocol RPC to this basic guestbook, enabling other web applications to access the guestbook's data. This tutorial only covers using Google Protocol RPC to extend the guestbook functionality; it's up to you what to do next. For example, you might want to write a tool that reads the messages posted by users and makes a time-series graph of posts per day. How you use Protocol RPC depends on your specific app; the important point is that Google Protocol RPC greatly expands what you can do with your application's data.
To begin, you'll create a file,
postservice.py, which implements remote methods to access data in the guestbook application's datastore.
Creating the PostService Module
The first step to get started with Google Protocol RPC is to create a file called
postservice.py in your application directory. You'll use this file to define the new service, which implements two methods—one that remotely posts data and another that remotely gets data.
You don't need to add anything to this file now—but this is the file where you'll put all the code defined in the subsequent sections. In the next section, you'll create a message that represents a note posted to the guestbook application's datastore.
Working with Messages
Messages are the fundamental data type used in Google Protocol RPC. Messages are defined by declaring a class that inherits from the Message base class. Then you specify class attributes that correspond to each of the message's fields.
For example, the guestbook service allows users to post a note. If you haven't done so already, create a file called
postservice.py in your application directory and review the guestbook tutorial if you need to. In that tutorial, guestbook greetings are put in the datastore using the guestbook.Greeting class. The PostService will also use the Greeting class to store a post in the datastore. Let's define a message that represents such a note:
from protorpc import messages class Note(messages.Message): text = messages.StringField(1, required=True) when = messages.IntegerField(2)
The note message is defined by two fields,
text and
when. Each field has a specific type. The text field is a unicode string representing the content of a user's post to the guestbook page. The
when field is an integer representing the post's timestamp. In defining the string, we also:
- Give each field a unique numerical value (
1for
textand
2for
when) that the underlying network protocol uses to identify the field.
- Make
texta required field. Fields are optional by default, you can mark them as required by setting
required=True. Messages must be initialized by setting required fields to a value. Google Protocol RPC service methods accept only properly initialized messages.
You can set values for the fields using the constructor of the Note class:
# Import the standard time Python library to handle the timestamp. import time note_instance = Note(text=u'Hello guestbook!', when=int(time.time()))
You can also read and set values on a message like normal Python attribute values. For example, to change the message:
print note_instance.text note_instance.text = u'Good-bye guestbook!' print note_instance.textwhich outputs the following
Hello guestbook! Good-bye guestbook!
Defining a Service
A service is a class definition that inherits from the Service base-class. Remote methods of a service are indicated by using the
remote decorator. Every method of a service accepts a single message as its parameter and returns a single message as its response.
Let's define the first method of the PostService. Add the following to your
postservice.py file:
import datetime from protorpc import message_types from protorpc import remote import guestbook class PostService(remote.Service): # Add the remote decorator to indicate the service methods @remote.method(Note, message_types.VoidMessage) def post_note(self, request): # If the Note instance has a timestamp, use that timestamp if request.when is not None: when = datetime.datetime.utcfromtimestamp(request.when) # Else use the current time else: when = datetime.datetime.now() note = guestbook.Greeting(content=request.text, date=when, parent=guestbook.guestbook_key) note.put() return message_types.VoidMessage()
The
remote decorator takes two parameters:
- The expected request type. The post_note() method accepts a Note instance as its request type.
- The expected response type. The Google Protocol RPC library comes with a built-in type called a VoidMessage (in the
protorpc.message_typesmodule), which is defined as a message with no fields. This means that the post_note() message does not return anything useful to its caller. If it returns without error, the message is considered to have been posted.
Since
Note.when is an optional field, it may not have been set by the caller. When this happens, the value of
when is set to None. When
Note.when is set to None, post_note() creates the timestamp using the time it received the message.
The response message is instantiated by the remote method and becomes the remote method's return value.
Registering the Service
You can publish your new service as a WSGI application using the
protorpc.wsgi.service library. Create a new file called
services.py in your application directory and add the following code to create your service:
from protorpc.wsgi import service import postservice # Map the RPC service and path (/PostService) app = service.service_mappings([('/PostService', postservice.PostService)])
Now, add the following handler to your
app.yaml file above the existing catch-all entry:
- url: /PostService.* script: services.app - url: .* script: guestbook.app
Testing the Service from the Command Line
Now that you've created the service, you can test it using
curl or a similar command-line tool.
# After starting the development web server: # NOTE: ProtoRPC always expect a POST. % curl -H \ 'content-type:application/json' \ -d '{"text": "Hello guestbook!"}'\
An empty JSON response indicates that the note posted successfully. You can see the note by going to your guestbook application in your browser ().
Adding Message Fields
Now that we can post messages to the PostService, let's add a new method to get messages from the PostService. First, we'll define a request message in
postservice.py that defines some defaults and a new enum field that tells the server how to order notes in the response. Define it above the
PostService class that you defined earlier:
class GetNotesRequest(messages.Message): limit = messages.IntegerField(1, default=10) on_or_before = messages.IntegerField(2) class Order(messages.Enum): WHEN = 1 TEXT = 2 order = messages.EnumField(Order, 3, default=Order.WHEN)
When sent to the PostService, this message requests a number of notes on or before a certain date and in a particular order. The
limit field indicates the maximum number of notes to fetch. If not explicitly set,
limit defaults to 10 notes (as indicated by the
default=10 keyword argument).
The order field introduces the EnumField class, which enables the
enum field type when the value of a field is restricted to a limited number of known symbolic values. In this case, the
enum indicates to the server how to order notes in the response. To define the enum values, create a sub-class of the Enum class. Each name must be assigned a unique number for the type. Each number is converted to an instance of the enum type and can be accessed from the class.
print 'Enum value Order.%s has number %d' % (GetNotesRequest.Order.WHEN.name, GetNotesRequest.Order.WHEN.number)
Each
enum value has a special characteristic that makes it easy to convert to their name or their number. Instead of accessing the name and number attribute, just convert each value to a string or an integer:
print 'Enum value Order.%s has number %d' % (GetNotesRequest.Order.WHEN, GetNotesRequest.Order.WHEN)
Enum fields are declared similarly to other fields except they must have the enum type as its first parameter before the field number. Enum fields can also have default values.
Defining the Response Message
Now let's define the get_notes() response message. The response needs to be a collection of Note messages. Messages can contain other messages. In the case of the
Notes.notes field defined below, we indicate that it is a collection of messages by providing the
Note class as the first parameter to the
messages.MessageField constructor (before the field number):
class Notes(messages.Message): notes = messages.MessageField(Note, 1, repeated=True)
The
Notes.notes field is also a repeated field as indicated by the
repeated=True keyword argument. Values of repeated fields must be lists of the field type of their declaration. In this case,
Notes.notes must be a list of Note instances. Lists are automatically created and cannot be assigned to None.
For example, here is how to create a Notes object:
response = Notes(notes=[Note(text='This is note 1'), Note(text='This is note 2')]) print 'The first note is:', response.notes[0].text print 'The second note is:', response.notes[1].text
Implement get_notes
Now we can add the get_notes() method to the PostService class:
import datetime import time from protorpc import remote class PostService(remote.Service): @remote.method(GetNotesRequest, Notes) def get_notes(self, request): query = guestbook.Greeting.query().order(-guestbook.Greeting.date) if request.on_or_before: when = datetime.datetime.utcfromtimestamp( request.on_or_before) query = query.filter(guestbook.Greeting.date <= when) notes = [] for note_model in query.fetch(request.limit): if note_model.date: when = int(time.mktime(note_model.date.utctimetuple())) else: when = None note = Note(text=note_model.content, when=when) notes.append(note) if request.order == GetNotesRequest.Order.TEXT: notes.sort(key=lambda note: note.text) return Notes(notes=notes) | https://cloud.google.com/appengine/docs/standard/python/tools/protorpc/?hl=ar | CC-MAIN-2020-10 | refinedweb | 2,028 | 50.63 |
In this tutorial we are going to teach you to use the Files class of Java 8 for reading a file line by line.
AdsTutorials
Today we will discuss the new API which is added to Java 8 for reading a file line by line. The program discussed here is reading file using this new API and it reads the file using the Buffered stream making it efficient program. This version of example code will use the minimum memory as it reads file line by line and there will be no out of memory exception in your program.
Adding of new methods to the java.nio.file.Files class to achieve the efficient reading to files is a good feature of Java 8.
In Java 8 the class java.nio.file.Files was modified and new methods were added to enable the class to read files using stream.
In Java 8 updates Files class is present in the java.nio.file package. Here are the details of method which is used for reading file in Java:
Class: java.nio.file.Files
This class is used for performing various operations on the files and directories.
The java.nio.file.Files class provides static methods that operate on files, directories and any other file type.
Files class of Java 8 provides following methods which can be used for reading file line by line:
Files.lines():
This method is used to read all lines from a file using the stream. This method is used to read file line by line with the use of stream which makes it an efficient class for reading file line by line.
The lines() method of Files class works by reading Byte from the file and then reads into character using the UTF-8 character encoding.
Following is example of reading file line by line using the Files.lines() method:
package net.roseindia; import java.io.File; import java.io.IOException; import java.nio.file.Files; import java.nio.file.Paths; public class ReadWithFilesLines { public static void main(String[] args) { System.out.println("Java 8 Read File Example with Files.lines() function"); try { Files.lines(new File("data.txt").toPath()) .map(s -> s.trim()) .filter(s -> !s.isEmpty()) .forEach(System.out::println); } catch (IOException e) { e.printStackTrace(); } } }
In the above example code we are using the Files.lines() function line by line and then prints the data on the console.
Reading file by applying the filter in Java 8
Following is the example of reading file line by line in Java 8 using the function Files.lines() by applying the filter:
package net.roseindia; import java.io.IOException; import java.nio.file.Files; import java.nio.file.Paths; public class ReadWithFilesLinesWithFilters { public static void main(String[] args) { System.out.println("Example of Reading file in Java 8 using Files.lines() by applying filter"); try { Files.lines(Paths.get("data.txt")).forEach(System.out::println); } catch (IOException e) { e.printStackTrace(); } } }
More examples of reading files in Java:
Advertisements
Posted on: April 5, 2017 If you enjoyed this post then why not add us on Google+? Add us to your Circles
Advertisements
Ads
Ads
Discuss: Read file line by line in Java 8
Post your Comment | http://www.roseindia.net/java/beginners/read-file-line-by-line-in-java-8.shtml | CC-MAIN-2017-17 | refinedweb | 536 | 67.86 |
Memory Trainer in Python
Introduction: Memory Trainer in Python
This program starts of by creating a list of words. It then shuffles the list and asks you how many words you would like to get tested on and how many seconds you would like per word
Then it checks your results and you get a score :)
If anyone out there wants to collaborate I will let you edit and upload new code!
Program starts here:
import random, time, sys, os
wordList = ['book','cook','table','car','pen','stand', 'carpet','computer','soap', 'ball','van','tub', 'bat','bed','book','boy','bun','can','cake','cap','cat', 'cow','cub','cup','dad','day','dog','doll','dust','fan','feet', 'girl','gun','hall','hat','hen','jar','kite','man','map','men', 'mom','pan','pet','pie','pig','pot','rat','son','sun','toe', 'marshmallow','steak','butterfly', 'fire','chocolate','banana', 'bunny','bubbles','rock','panda','kitten','meerkat', 'crazy', 'dinosaur', 'coffee']
random.shuffle(wordList)
points = 0
print("how many words do you want to test yourself on")
testLength = int(input()) # how many words do tou want to test?
print("how many seconds per word")
timePerWord = int(input()) # how much time do you want per word
print("These are the words that you have to memorise:")
def spaceOut():
for i in range(0,14):
print('*')
for i in range(0,testLength):
spaceOut()
print(wordList[i])
spaceOut()
time.sleep(timePerWord)
print("now its your turn to guess all the words in the correct sequence")
for i in range(0,testLength):
yourGuess = input()
if yourGuess ==wordList[i]:
points = points + 1
print("Correct!!")
else:
print("Nope, the answer is " + wordList[i])
print("your scored " + str(points) + " out of " + str(testLength))
time.sleep(6)
I checked out your code and tried to run it but it always gave me an error. I am by no means an expert but I tweaked the code just a bit to make it more aesthetically pleasing for me. I just started learning python 2 weeks ago out of curiosity and passion. I don't want to come off as someone who is cocky or showboating so please do not be offended by what I have done. I changed line 36's code because that was what gave the error this whole time. Everything else i did was just for fun.
yourGuess = input() to yourGuess = raw_input(). | http://www.instructables.com/id/Memory-Trainer-in-Python/ | CC-MAIN-2017-30 | refinedweb | 382 | 58.21 |
Strange things can happen when working with characters. It is
important to understand why problems occur and what can be done about
them. This post is about getting Unicode to work at the Windows command
prompt (
cmd.exe).
Topics:
- "Penny wise and pound foolish" - a character corruption example
- Character encodings and code pages
- Switching the Windows console to Windows-1252
- Unicode
- Unicode and the Windows console
- .Net and the Windows console
- Java and the Windows console
- End notes
This article requires your browser to be able to display
Unicode characters. E.g.
я == я - if you
see a question mark there instead of a Cyrillic grapheme (
), some of this article may not make as much
sense.
£
"Penny wise and pound foolish" - a character corruption example
Lets look at the pound symbol
(£ - the currency symbol) on Windows XP configured with British
English regional settings. Most native English-speakers wouldn't regard
this as a particularly exotic character. If it isn't on your keyboard,
you can type it by holding down the right
Alt key and
typing
0163 on the numeric keypad (assuming you're using a
Western European/US configured Windows). This may require some finger
gymnastics with custom function keys on laptops. Alternatively, you can
cut and paste it using the Windows Character Map
app (type
charmap at the command prompt).
If we save this data using Notepad and dump it at the console,
the pound symbol is not printed. Instead, we get a lower case
u
with acute (ú).
C:\demo>TYPE plaintext.txt abcú
If we copy the file to another machine (Ubuntu 8.10, British regional settings) and dump it to a console, we just get an error question mark symbol.
~$ cat plaintext.txt abc?
However, when we open the file using the GNOME text editor (gedit), everything looks fine - the pound symbol appears.
Terminology: the big, bold name for the
#
symbol is NUMBER SIGN (though it is also known as: pound sign; hash;
crosshatch; octothorpe; etc.). The big, bold name for
£
is POUND SIGN (also known as: pound sterling; Irish punt; Italian lira;
Turkish lira; etc.).
Character encodings and code pages
To understand the problem, it is best to look at the hexadecimal form of the text file.
a b c £ 61 62 63 A3
Each of the programs is interpreting these values differently.
You can look up values on the 8bit charts by taking the hex value and matching the first digit to the left vertical column and the second digit to the top horizontal column. Note that the mappings in UTF-8 are not so simple; only a limited subset of characters can be represented in one byte; the remainder may need up to four bytes.
The character
a has the same value in all these
mappings. This is a convenience mechanism used in the design of many
encodings (see the range of ASCII
printable characters) and is probably where the myth that there is such
a thing as "plain text" comes from. In the case of the pound character,
it does not share the same value in all encodings. It is represented by
a different value in some (e.g. Cp850; UTF-8) or may not be present
(e.g. Windows-1250;
Windows-1251).
Applications display the mappings for the encoding they use:
A3
maps to
ú in Cp850; the value
A3 does
not map to a valid UTF-8 character, so an error symbol is displayed (a
question mark is often used for errors).
Switching the Windows console to Windows-1252
Two steps are required to get the pound symbol to display in the Windows XP console when encoded as Windows-1252.
- Switch the console from the raster font to a Unicode TrueType font.
- Type
chcp 1252to switch code page.
C:\demo>chcp 1252 Active code page: 1252 C:\demo>TYPE plaintext.txt abc£
Unicode
Unicode is a single, unified character set that can describe any character. Prior to this, Windows developers juggled single-byte sets (like Windows-1252), double-byte sets (like the Korean code page 949) or used home grown solutions (like LMBCS). Despite standardisation on Unicode, we will be dealing with legacy character issues for a long time to come.
The Unicode character set assigns each character a unique value (see code charts). Encodings (e.g. UTF-8; UTF-16) describe how these values are mapped to byte values (using different bit values as a basis; e.g. 8bit; 16bit).
- UTF-8 is popular for text encoding because some binary values are common to many other character sets, particularly English/Latin characters. This is useful for working with apps that predate Unicode, or don't support it.
- Since the most useful characters can be described in 16bits, it is a popular size for
chardata types (Java and .Net use UTF-16).
- Even though it uses 16bits as a basis, UTF-16 can result in smaller files than UTF-8. It depends on the values being stored.
- Unicode introduces a host of new features and gotchas. For example, some sequences of characters form combining character sequences. The Devanagari characters
0915(क),
094D(्),
0924(त) and
0941(ु) combine to form the grapheme क्तु.
Unicode and the Windows console
Under Windows NT and CE based versions of Windows, the screen buffer uses four bytes per character cell: two bytes for character code, two bytes for attributes. The character is then encoded as Unicode (UTF-16).".
It is best to demonstrate by looking at the Win32 API. If you've never dealt with Windows programming, here are some of the salient points:
- Here,
charis 8 bits and
wchar_tis 16 - "ANSI" and "Unicode" (or "wide") in Windows parlance. You won't often see strings expressed in these terms - they are often hidden behind types like LPCTSTR.
- Functions often come in three flavours -
FunctionAfor ANSI (e.g.
LPCSTR),
FunctionWfor wide chars (e.g.
LPCWSTR), and plain
Functionfor using macros to make it a compile time decision (e.g.
LPCTSTR).
#include "windows.h" void writeAnsiChars(HANDLE stdout) { // SetConsoleOutputCP(1252); char *ansi_pound = "\xA3"; //A3 == pound character in Windows-1252 WriteConsoleA(stdout, ansi_pound, strlen(ansi_pound), NULL, NULL); } void writeUnicodeChars(HANDLE stdout) { wchar_t *arr[] = { L"\u00A3", //00A3 == pound character in UTF-16 L"\u044F", //044F == Cyrillic Ya in UTF-16 L"\r\n", //CRLF 0 }; for(int i=0; arr[i] != 0; i++) { WriteConsoleW(stdout, arr[i], wcslen(arr[i]), NULL, NULL); } } int main() { HANDLE stdout = GetStdHandle(STD_OUTPUT_HANDLE); if(INVALID_HANDLE_VALUE == stdout) return 1; writeAnsiChars(stdout); writeUnicodeChars(stdout); return 0; }
The above code uses two mechanisms to emit strings: the pound
symbol is printed using the ANSI API; then via the Unicode API with a
Cyrillic Ya (and a new line). You can uncomment the
SetOutputConsoleCp
call to avoid having to use
chcp beforehand. Here is the
code compiled with MinGW:
C:\demo>chcp Active code page: 1252 C:\demo>g++ printchars.cpp -o printchars.exe C:\demo>printchars.exe ££я
Alas, things still aren't perfect. The Lucida
Console font can display Cyrillic, but doesn't include the graphemes
required for many characters (e.g. HALFWIDTH KATAKANA LETTER NU -
\uFF87).
Note: I didn't try using the Standard
C++ Library
std::wstring with
std::wcout - support isn't
there yet for MinGW (GCC version 3.4.2). I would probably have had
better luck with Visual
Studio. Take the claims of UTF-8 support in the Windows console with a
pinch of salt. Batch files won't run if you set the console to this
mode.
.Net and the Windows console
Calling the Unicode API is not what .Net does. The console needs to be set to UTF-8 to get Unicode support. C# code:
public class PrintCharsNet { public static void Main(System.String[] args) { System.Console.Write("\u00A3\u044F"); } }
Note that the characters are denoted by Unicode
escape sequences (
\u00A3 == £). This code (compiled
here with Mono) can
reliably print the pound symbol regardless of the code page the console
is set to. However, Ya requires a bigger character set.
C:\demo>mcs PrintCharsNet.cs C:\demo>chcp Active code page: 850 C:\demo>PrintCharsNet.exe £? C:\demo>chcp 1252 Active code page: 1252 C:\demo>PrintCharsNet.exe £? C:\demo>chcp 65001 Active code page: 65001 C:\demo>PrintCharsNet.exe £я
Emulating this behaviour in C++:
#include "windows.h" int main() { HANDLE stdout = GetStdHandle(STD_OUTPUT_HANDLE); if(INVALID_HANDLE_VALUE == stdout) return 1; UINT codepage = GetConsoleOutputCP(); wchar_t *unicode = L"\u00A3\u044F"; int lenW = wcslen(unicode); int lenA = WideCharToMultiByte(codepage, 0, unicode, lenW, 0, 0, NULL, NULL); char *ansi = new char[lenA + 1]; WideCharToMultiByte(codepage, 0, unicode, lenW, ansi, lenA, NULL, NULL); WriteFile(stdout, ansi, lenA, NULL, NULL); delete[] ansi; return 0; }
Java and the Windows console
Java code to print the pound and Ya characters:
public class PrintCharsJava { public static void main(String[] args) throws java.io.IOException { System.out.print("\u00A3\u044F"); } }
In Java, characters printed to
System.out are
encoded using the default platform encoding, a property based on
operating system settings. For historical reasons, this is likely to be
a legacy encoding.
On the test operating system, this is
Cp1252 (list
of Java encoding aliases).
C:\demo>javac PrintCharsJava.java C:\demo>chcp Active code page: 850 C:\demo>java -cp . PrintCharsJava ú? C:\demo>chcp 1252 Active code page: 1252 C:\demo>java -cp . PrintCharsJava £? C:\demo>chcp 65001 Active code page: 65001 C:\demo>java -cp . PrintCharsJava ??
Emulating this behaviour with C++:
#include "windows.h" int main() { HANDLE stdout = GetStdHandle(STD_OUTPUT_HANDLE); if(INVALID_HANDLE_VALUE == stdout) return 1; wchar_t *unicode = L"\u00A3\u044F"; int lenW = wcslen(unicode); int lenA = WideCharToMultiByte(CP_ACP, 0, unicode, lenW, 0, 0, NULL, NULL); char *ansi = new char[lenA + 1]; // CP_ACP == default system ANSI code page WideCharToMultiByte(CP_ACP, 0, unicode, lenW, ansi, lenA, NULL, NULL); WriteFile(stdout, ansi, lenA, NULL, NULL); delete[] ansi; return 0; }
Ordinarily, the pound character will only display if the console is also using the default system encoding. It is possible to override the encoding:
C:\demo>chcp 65001 Active code page: 65001 C:\demo>java -Dfile.encoding=UTF-8 PrintCharsJava £яя
Setting
file.encoding on the command line is not
recommended; doing so may adversely affect other encoding and I/O
operations. You can encode output programmatically too, but it doesn't
appear to be a very reliable approach - you'll note that, either way,
too many characters are written to the console (there should be only one
я).
//a programmatic approach byte[] encoded = "\u00A3\u044F".getBytes("UTF-8"); System.out.write(encoded);
It isn't clear why too many characters appear in the output. Maybe it is a JRE bug; maybe I'm just missing something. It looks like native calls are the best way to get Unicode from Java to the Windows console.
End notes
This article only discusses output. Further reading is required to handle input.
Although this code was written on Windows XP, cursory testing on
the public Windows 7 Beta suggests that not much has changed for
cmd.exe.
I'm being bad - I don't tell my compilers what encoding the
source files should be decoded as! I'm sticking to values I know are
common to many character encodings, including my default Windows-1252,
so I know they are going to be decoded correctly. But this is why I tend
to write
\u00A3 in my code instead of the literal
£.
Links:
- Code Page Identifiers (Windows)
- Code page tables supported by Windows
- SetConsoleOutputCP Only Effective with Unicode Fonts
- Necessary criteria for fonts to be available in a command window
- Why are console windows limited to Lucida Console and raster fonts?
- chcp can't do everything
In general, Raymond Chen's The Old New Thing and Michael Kaplan's Sorting it all Out are great blogs to search for Windows-related gems.
Versions used:
Thank you so much. I have searched all day for this info. I was thinking that there was something wrong with my simple c++ program that needed to output german characters.
I looked all over for a way to tell the program that the characters I was using was latin-1. When I couldn't find anything, I switched to looking at codepages but that still didn't work. It came down to the rasterfont part, that I didn't know about. Geez, this character set thing is so screwed up....
Michael Kaplan has posted a round-up of articles related to cmd.exe and character handling in Myth busting in the console.
Fantastic article!A great blog all in all!
Here is the equivelant example in Perl.
I can display a utf-16le encoded file just fine using type command in windows console, without calling chcp first. Why can't Java console i/o commands do the same?
TYPE is a shell command built into cmd.exe. Microsoft-generated text files are generally prefixed with a byte-order-mark. I expect cmd.exe detects this and uses WriteConsoleW (Unicode) or some internal equivalent to write the data to the device. cmd.exe is in a position to make more decisions about how to interpret its I/O.
I expect System.out maps onto STDOUT by default. You don't know what STDOUT is redirected to (a file, another program, etc.) It would be difficult to bake in any intelligent decision-making into System.out - it's just a byte stream with an encoder for the char methods. Choosing UTF-16 as the encoding would not help as cmd.exe doesn't accept UTF-16 via the standard I/O handles (and in any case, Windows 95's command.com probably didn't support Unicode so you're talking about breaking changes to behaviour.)
A more sensible candidate for adding Unicode console support would have been via the Console type. You can only acquire it under a terminal and it already interprets the console's legacy encodings. As to why it couldn't use the Unicode I/O methods - that's a question for the JDK maintainers.
Great article, but what are you doing if you want to use Hebrew, or any other local language which is not English?!?
Unfortunately, even if your system supporting the language, the command shell's properties showing only those 2 options (Lucida Console, Raster Font), both do not support Hebrew!
In this case you might need this article, which will help you in a preview step (changing the Registry and adding other fonts to the list) | http://illegalargumentexception.blogspot.co.uk/2009/04/i18n-unicode-at-windows-command-prompt.html | CC-MAIN-2018-22 | refinedweb | 2,403 | 63.59 |
The DNS Update protocol (RFC 2136) integrates DNS with DHCP. The two protocols are complementary: DHCP centralizes and automates IP address allocation, while DNS automatically records the association between assigned addresses and hostnames. Using DHCP with DNS update configures a host automatically for network access whenever it attaches to the IP network. You can locate and reach the host using its unique DNS hostname. Mobile hosts, for example, can move freely without user or administrator intervention.
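RFC 2136 reuses the ordinary DNS message format: the question section becomes a zone section, and the answer and authority sections carry prerequisite and update records. The following standard-library Python sketch builds a minimal update message that adds one A record (the zone, hostname, and address are placeholder values, and no prerequisites or TSIG are included):

```python
import socket
import struct

def encode_name(name: str) -> bytes:
    # DNS wire format: length-prefixed labels, terminated by a zero octet.
    out = b""
    for label in name.rstrip(".").split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

# Header: ID, flags (opcode 5 = UPDATE), then the four section counts
# (ZOCOUNT, PRCOUNT, UPCOUNT, ADCOUNT).
header = struct.pack("!HHHHHH", 0x1234, 5 << 11, 1, 0, 1, 0)

# Zone section: the zone being updated (type SOA = 6, class IN = 1).
zone = encode_name("example.com") + struct.pack("!HH", 6, 1)

# Update section: add an A record for the client's name
# (type A = 1, class IN = 1, TTL, RDLENGTH, RDATA).
update = (encode_name("host1.example.com")
          + struct.pack("!HHIH", 1, 1, 3600, 4)
          + socket.inet_aton("192.0.2.10"))

msg = header + zone + update
# A real updater would now send `msg` over UDP or TCP to port 53 of the
# zone's primary server and check the RCODE in the response.
```

In a deployment, the DHCP server performs this exchange on the client's behalf whenever it grants or releases a lease.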
This chapter explains how to use DNS update with Cisco Network Registrar servers, and its special relevance to Windows client systems.
DNS Update Process
Special DNS Update Considerations
DNS Update for DHCPv6
Creating DNS Update Configurations
Creating DNS Update Maps
Configuring Access Control Lists and Transaction Security
Configuring DNS Update Policies
Confirming Dynamic Records
Scavenging Dynamic Records
Troubleshooting DNS Update
Configuring DNS Update for Windows Clients
To configure DNS updates, you must:
1. Create a DNS update configuration for a forward or reverse zone or both. See the "Creating DNS Update Configurations" section.

2. Use this DNS update configuration in either of two ways:

– Specify the DNS update configuration on a named, embedded, or default DHCP policy. See the "Creating and Applying DHCP Policies" section on page 21-3.

– Define a DNS update map to autoconfigure a single DNS update relationship between a Cisco Network Registrar DHCP server or failover pair and a DNS server or High-Availability (HA) pair. Specify the update configuration in the DNS update map. See the "Creating DNS Update Maps" section.

3. Optionally define access control lists (ACLs) or transaction signatures (TSIGs) for the DNS update. See the "Configuring Access Control Lists and Transaction Security" section.

4. Optionally create one or more DNS update policies based on these ACLs or TSIGs and apply them to the zones. See the "Configuring DNS Update Policies" section.

5. Adjust the DNS update configuration for Windows clients, if necessary; for example, for dual zone updates. See the "Configuring DNS Update for Windows Clients" section.

6. Configure DHCP clients to supply hostnames or request that Cisco Network Registrar generate them.

7. Reload the DHCP and DNS servers, if necessary based on the edit mode.
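Step 3 mentions transaction signatures (TSIG, RFC 2845). Conceptually, a TSIG-protected update carries a keyed hash (HMAC) computed over the message with a secret shared between the updater and the DNS server, so the receiver can authenticate the update. The sketch below shows only that keyed-hash idea with Python's standard library; a real TSIG MAC also covers the TSIG record's own fields (key name, algorithm, signing time), and the secret here is a made-up value:

```python
import base64
import hashlib
import hmac

# Shared secret as it would appear in a key file (base64) -- made-up value.
secret_b64 = "c2VjcmV0LXNoYXJlZC1rZXk="
key = base64.b64decode(secret_b64)

def sign(message: bytes) -> bytes:
    # HMAC-SHA256 over the outgoing update message (simplified; the TSIG
    # specification also folds the key name, algorithm, and timestamp
    # into the digest).
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, mac: bytes) -> bool:
    # Constant-time comparison against the MAC carried in the request.
    return hmac.compare_digest(sign(message), mac)

update_msg = b"...dns update message bytes..."
mac = sign(update_msg)
```

The server recomputes the HMAC with its copy of the secret and rejects the update if the two values differ, which is why both ends must be configured with the same key name and secret.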
Consider these two issues when configuring DNS updates:
• For security purposes, the Cisco Network Registrar DNS update process does not modify or delete a name an administrator manually enters in the DNS database.

• If you enable DNS update for large deployments, and you are not using HA DNS (see Chapter 18, "Configuring High-Availability DNS Servers"), divide primary DNS and DHCP servers across multiple clusters. DNS update generates an additional load on the servers.
Cisco Network Registrar currently supports DHCPv6 DNS update over IPv4 only. For DHCPv6, DNS update applies to nontemporary stateful addresses only, not delegated prefixes.
DNS update for DHCPv6 involves AAAA and PTR RR mappings for leases. Cisco Network Registrar 7.2 supports server- or extension-synthesizing fully qualified domain names and the DHCPv6 client-fqdn option (39).
Because Cisco Network Registrar is compliant with RFCs 4701, 4703, and 4704, it supports the DHCID resource record (RR). All RFC-4703-compliant updaters can generate DHCID RRs and result in data that is a hash of the client identifier (DUID) and the FQDN (per RFC 4701). Nevertheless, you can use AAAA and DHCID RRs in update policy rules.
DNS update processing for DHCPv6 is similar to that for DHCPv4 except that a single FQDN can have more than one lease, resulting in multiple AAAA and PTR RRs for a single client. The multiple AAAA RRs can be under the same name or a different name; however, PTR RRs are always under a different name, based on the lease address. RFC-4703-compliant updaters use the DHCID RR to avoid collisions among multiple clients.
Note
Because DHCPv4 uses TXT RRs and DHCPv6 uses DHCID RRs for DNS update, to avoid conflicts, dual-stack clients cannot use single forward FQDNs. These conflicts primarily apply to client-requested names and not generated names, which are generally unique. To avoid these conflicts, use different zones for the DHCPv4 and DHCPv6 names.
Note
If the DNS server is down and the DHCP server cannot complete the DNS updates to remove RRs added for a DHCPv6 lease, the lease continues to exist in the AVAILABLE state. Only the same client can reuse the lease.
DHCPv6 Upgrade Considerations
Generating Synthetic Names in DHCPv6
Determining Reverse Zones for DNS Updates
Using the Client FQDN
If you use any policy configured prior to Cisco Network Registrar 7.2 that references a DNS update object for DHCPv6 processing (see the "DHCPv6 Policy Hierarchy" section on page 26-9), after the upgrade, the server begins queuing DNS updates to the specified DNS server or servers. This means that DNS updates might automatically (and unexpectedly) start for DHCPv6 leases.
If clients do not supply hostnames, DHCPv6 includes a synthetic name generator. Because a DHCPv6 client can have multiple leases, Cisco Network Registrar uses a different mechanism than that for DHCPv4 to generate unique hostnames. The v6-synthetic-name-generator attribute for the DNS update configuration allows appending a generated name to the synthetic-name-stem based on the:
•
Hash of the client DHCP Unique Identifier (DUID) value (the preset value).
•
Raw client DUID value (as a hex string with no separators).
•
CableLabs cablelabs-17 option device-id suboption value (as a hex string with no separators, or the hash of the client DUID if not found).
•
CableLabs cablelabs-17 option cm-mac-address suboption value (as a hex string with no separators, or the hash of the client DUID if not found).
See the "Creating DNS Update Configurations" section for how to create a DNS update configuration with synthetic name generation.
In the CLI, an example of this setting is:
nrcmd> dhcp-dns-update example-update-config set v6-synthetic-name-generator=hashed-duid
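As a rough illustration only, the generator options above amount to appending a suffix to the synthetic-name-stem. The exact hash algorithm and suffix length Cisco Network Registrar uses are not documented here, so SHA-256 truncated to 16 hex digits is assumed purely for this sketch:

```python
import hashlib

def v6_synthetic_name(stem, duid, mode="hashed-duid"):
    if mode == "hashed-duid":
        # Hash of the client DUID; SHA-256 truncated to 16 hex digits is
        # assumed here purely for illustration.
        suffix = hashlib.sha256(duid).hexdigest()[:16]
    elif mode == "duid":
        # Raw DUID value as a hex string with no separators.
        suffix = duid.hex()
    else:
        raise ValueError("unsupported mode: " + mode)
    return stem + "-" + suffix

print(v6_synthetic_name("dhcp", bytes.fromhex("000100012874d2a4aabbccddeeff"), "duid"))
# dhcp-000100012874d2a4aabbccddeeff
```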
The DNS update configuration uses the prefix length value in the specified reverse-zone-prefix-length attribute to generate a reverse zone in the ip6.arpa domain. You do not need to specify the full reverse zone, because you can synthesize it by using the ip6.arpa domain. You set this attribute for the reverse DNS update configuration (see the "Creating DNS Update Configurations" section). Here are some rules for reverse-zone-prefix-length:
•
Use a multiple of 4 for the value, because ip6.arpa zones are on 4-bit boundaries. If not a multiple of 4, the value is rounded up to the next multiple of 4.
•
The maximum value is 124, because specifying 128 would create a zone name without any possible hostnames contained therein.
•
A value of 0 means none of the bits are used for the zone name, hence ip6.arpa is used.
•
If you omit the value from the DNS update configuration, the server uses the value from the prefix or, as a last resort, the prefix length derived from the address value of the prefix (see the "Configuring Prefixes" section on page 26-18).
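The rounding rules above can be sketched as follows (a hypothetical helper, not Cisco Network Registrar code):

```python
import ipaddress

def v6_reverse_zone(prefix, prefix_len):
    # ip6.arpa zones sit on 4-bit boundaries, so round up to a multiple of 4.
    rounded = (prefix_len + 3) // 4 * 4
    # 124 is the maximum useful length; 128 would leave no room for hostnames.
    rounded = min(rounded, 124)
    if rounded == 0:
        return "ip6.arpa."          # none of the bits are used for the zone name
    nibbles = ipaddress.IPv6Address(prefix).exploded.replace(":", "")
    zone_part = nibbles[: rounded // 4]
    return ".".join(reversed(zone_part)) + ".ip6.arpa."

print(v6_reverse_zone("2001:db8::", 32))   # 8.b.d.0.1.0.0.2.ip6.arpa.
```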
Note that to synthesize the reverse zone name, the synthesize-reverse-zone attribute must remain enabled for the DHCP server. Thus, the order in which a reverse zone name is synthesized for DHCPv6 is:
1.
Use the full reverse-zone-name in the reverse DNS update configuration.
2.
Base it on the ip6.arpa zone from the reverse-zone-prefix-length in the reverse DNS update configuration.
3.
Base it on the ip6.arpa zone from the reverse-zone-prefix-length in the prefix definition.
4.
Base it on the ip6.arpa zone from the prefix length for the address in the prefix definition.
In the CLI, an example of setting the reverse zone prefix length is:
nrcmd> dhcp-dns-update example-update-config set reverse-zone-prefix-length=32
To create a reverse zone for a prefix in the web UI, the List/Add Prefixes page includes a Create Reverse Zone button for each prefix. (See the "Creating and Editing Prefixes" section on page 26-24.)
The CLI also provides the prefix name createReverseZone [-range] command to create a reverse zone for a prefix (from its address or range value). Delete the reverse zone by using prefix name deleteReverseZone [-range].
You can also create a reverse zone from a DHCPv4 subnet or DHCPv6 prefix by entering the subnet or prefix value when directly configuring the reverse zone. See the "Adding Primary Reverse Zones" section on page 15-12 for details.
The existing DHCP server use-client-fqdn attribute controls whether the server pays attention to the DHCPv6 client FQDN option in the request. The rules that the server uses to determine which name to return when multiple names exist for a client are in the following order of preference:
1.
The server FQDN that uses the client requested FQDN if it is in use for any lease (even if not considered to be in DNS).
2.
The FQDN with the longest valid lifetime considered to be in DNS.
3.
The FQDN with the longest valid lifetime that is not yet considered to be in DNS.
A DNS update configuration defines the DHCP server framework for DNS updates to a DNS server or HA DNS server pair. It determines if you want to generate forward or reverse zone DNS updates (or both). It optionally sets TSIG keys for the transaction, attributes to control the style of autogenerated hostnames, and the specific forward or reverse zone to be updated. You must specify a DNS update configuration for each unique server relationship.
For example, if all updates from the DHCP server are directed to a single DNS server, you can create a single DNS update configuration that is set on the server default policy. To assign each group of clients in a client-class to a corresponding forward zone, set the forward zone name for each in a more specific client-class policy.
Step 1
From the DHCP menu, choose DNS Updates to open the List/Add DNS Update Configurations page.
Step 2
Click Add DNS Update Configuration to open the Add DNS Update Configuration page.
Step 3
Enter a name for the update configuration in the name attribute field.
Step 4
Click the appropriate dynamic-dns setting:
•
update-none—Do not update forward or reverse zones.
•
update-all—Update forward and reverse zones (the default value).
•
update-fwd-only—Update forward zones only.
•
update-reverse-only—Update reverse zones only.
Step 5
Set the other attributes appropriately:
a.
If necessary, enable synthesize-name and set the synthetic-name-stem value.
You can set the stem of the default hostname to use if clients do not supply hostnames, by using synthetic-name-stem. For DHCPv4, enable the synthesize-name attribute to trigger the DHCP server to synthesize unique names for clients based on the value of the synthetic-name-stem. The resulting name is the name stem appended with the hyphenated IP address. For example, if you specify a synthetic-name-stem of host for address 192.168.50.1 in the example.com domain, and enable the synthesize-name attribute, the resulting hostname is host-192-168-50-1.example.com. The preset value for the synthetic name stem is dhcp.
The synthetic-name-stem must:
•
Be a relative name without a trailing dot.
•
Include alphanumeric values and hyphens (-) only. Space characters and underscores become hyphens and other characters are removed.
•
Include no leading or trailing hyphen characters.
•
Have DNS hostnames of no more than 63 characters per label and 255 characters in their entirety. The algorithm uses the configured forward zone name to determine the number of available characters for the hostname, and truncates the end of the last label if necessary.
For DHCPv6, see the "Generating Synthetic Names in DHCPv6" section.
b.
Set forward-zone-name to the forward zone, if updating forward zones. Note that the policy forward-zone-name takes precedence over the one set in the DNS update configuration.
For DHCPv6, the server ignores the client and client-class policies when searching for a forward-zone-name value in the policy hierarchy. The search for a forward zone name begins with the prefix embedded policy.
c.
For DHCPv4, set reverse-zone-name to the reverse (in-addr.arpa) zone to be updated with PTR and TXT records. If unset and the DHCP server synthesize-reverse-zone attribute is enabled, the server synthesizes a reverse zone name based on the address of each lease, scope subnet number, and DNS update configuration (or scope) dns-host-bytes attribute value.
The dns-host-bytes value controls the split between the host and zone parts of the reverse zone name. The value sets the number of bytes from the lease IP address to use for the hostname; the remaining bytes are used for the in-addr.arpa zone name. A value of 1 means use just one byte for the host part of the domain and the other three from the domain name (reversed). A value of 4 means use all four bytes for the host part of the address, thus using just the in-addr.arpa part of the domain. If unset, the server synthesizes an appropriate value based on the scope subnet size, or if the reverse-zone-name is defined, calculates the host bytes from this name.
For DHCPv6, see the "Determining Reverse Zones for DNS Updates" section.
d.
Set server-addr to the IP address of the primary DNS server for the forward zone (or reverse zone if updating reverse zones only).
e.
Set server-key and backup-server-key if you are using a TSIG key to process all DNS updates (see the "Transaction Security" section).
f.
Set backup-server-addr to the IP address of the backup DNS server, if HA DNS is configured.
g.
If necessary, enable or disable update-dns-first (preset value disabled) or update-dns-for-bootp (preset value enabled). The update-dns-first setting controls whether DHCP updates DNS before granting a lease. Enabling this attribute is not recommended.
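The dns-host-bytes split described in Step 5c can be sketched as follows (a hypothetical helper for illustration; it handles only whole-byte splits):

```python
def reverse_zone_and_host(ip, dns_host_bytes):
    # dns_host_bytes of the lease address form the host part of the PTR name;
    # the remaining bytes, reversed, form the in-addr.arpa zone name.
    octets = ip.split(".")
    zone_octets = octets[: 4 - dns_host_bytes]
    host_octets = octets[4 - dns_host_bytes:]
    if zone_octets:
        zone = ".".join(reversed(zone_octets)) + ".in-addr.arpa."
    else:
        zone = "in-addr.arpa."      # dns-host-bytes of 4 uses just in-addr.arpa
    host = ".".join(reversed(host_octets))
    return host, zone

print(reverse_zone_and_host("192.168.50.1", 1))
# ('1', '50.168.192.in-addr.arpa.')
```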
Step 6
At the regional level, you can also push update configurations to the local clusters, or pull them from the replica database on the List/Add DNS Update Configurations page.
Step 7
Click Add DNS Update Configuration.
Step 8
To specify this DNS update configuration on a policy, see the "Creating and Applying DHCP Policies" section on page 21-3.
Use dhcp-dns-update name create. For example:
nrcmd> dhcp-dns-update example-update-config create
Set the dynamic-dns attribute to its appropriate value (update-none, update-all, update-fwd-only, or update-reverse-only). For example:
nrcmd> dhcp-dns-update example-update-config set dynamic-dns=update-all
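The DHCPv4 name synthesis described in Step 5a can be sketched as follows (a simplified illustration that omits the 63-character label and 255-character name truncation step):

```python
import re

def synth_name(stem, ip, forward_zone):
    # Spaces and underscores become hyphens; other invalid characters are
    # removed, as are leading and trailing hyphens.
    stem = stem.replace(" ", "-").replace("_", "-")
    stem = re.sub(r"[^A-Za-z0-9-]", "", stem).strip("-")
    host = stem + "-" + ip.replace(".", "-")
    return host + "." + forward_zone

print(synth_name("host", "192.168.50.1", "example.com"))
# host-192-168-50-1.example.com
```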
DNS Update Process
Special DNS Update Considerations
DNS Update for DHCPv6
A DNS update map simplifies configuring DNS updates by synchronizing the update properties between HA DNS server pairs or DHCP failover server pairs, based on a single update configuration, thereby reducing redundant data entry. The update map applies to all the primary zones that the DNS pairs service, or all the scopes that the DHCP pairs service. You must specify a policy for the update map. To use this function, you must be an administrator assigned the server-management subrole of the dns-management or central-dns-management role, and the dhcp-management role (for update configurations).
Step 1
From the DNS menu, choose Update Maps to open the List/Add DNS Update Maps page.
Step 2
Click Add DNS Update Map to open the Add DNS Update Map page.
Step 3
Enter a name for the update map in the Name field.
Step 4
Enter the DNS update configuration from the previous section in the dns-config field.
Step 5
Set the kind of policy selection you want for the dhcp-policy-selector attribute. The choices are:
•
use-named-policy—Use the named policy set for the dhcp-named-policy attribute (the preset value).
•
use-client-class-embedded-policy—Use the embedded policy from the client-class set for the dhcp-client-class attribute.
•
use-scope-embedded-policy—Use the embedded policy from the scope.
Step 6
If using update ACLs (see the "Configuring Access Control Lists and Transaction Security" section) or DNS update policies (see the "Configuring DNS Update Policies" section), set either the dns-update-acl or dns-update-policy-list attribute. Either value can be one or more entries separated by commas. The dns-update-acl takes precedence over the dns-update-policy-list.
If you omit both values, a simple update ACL is constructed whereby only the specified DHCP servers or failover pair can perform updates, along with any server-key value set in the update configuration specified for the dns-config attribute.
Step 7
Click Add DNS Update Map.
Step 8
At the regional level, you can also push update maps to the local clusters, or pull them from the replica database on the List/Add DNS Update Maps page.
Specify the name, cluster of the DHCP and DNS servers (or DHCP failover or HA DNS server pair), and the DNS update configuration when you create the update map, using dns-update-map name create dhcp-cluster dns-cluster dns-config. For example:
nrcmd> dns-update-map example-update-map create Example-cluster Boston-cluster example-update-config
Set the dhcp-policy-selector attribute value to use-named-policy, use-client-class-embedded-policy, or use-scope-embedded-policy. If using the use-named-policy value, also set the dhcp-named-policy attribute value. For example:
nrcmd> dns-update-map example-update-map set dhcp-policy-selector=use-named-policy
nrcmd> dns-update-map example-update-map set dhcp-named-policy=example-policy
ACLs are authorization lists, while transaction signatures (TSIG) is an authentication mechanism:
•
ACLs enable the server to allow or disallow the request or action defined in a packet.
•
TSIG ensures that DNS messages come from a trusted source and are not tampered with.
For each DNS query, update, or zone transfer that is to be secured, you must set up an ACL to provide permission control. TSIG processing is performed only on messages that contain TSIG information. A message that does not contain, or is stripped of, this information bypasses the authentication process.
For a totally secure solution, messages should be authorized by the same authentication key. For example, if the DHCP server is configured to use TSIG for DNS updates and the same TSIG key is included in the ACL for the zones to be updated, then any packet that does not contain TSIG information fails the authorization step. This secures the update transactions and ensures that messages are both authenticated and authorized before making zone changes.
ACLs and TSIG play a role in setting up DNS update policies for the server or zones, as described in the "Configuring DNS Update Policies" section.
Access Control Lists
Configuring Zones for Access Control Lists
Transaction Security
You assign ACLs on the DNS server or zone level. ACLs can include one or more of these elements:
•
IP address—In dotted decimal notation; for example, 192.168.1.2.
•
Network address—In dotted decimal and slash notation; for example, 192.168.0.0/24. In this example, only hosts on that network can update the DNS server.
•
Another ACL—Must be predefined. You cannot delete an ACL that is embedded in another one until you remove the embedded relationship. You should not delete an ACL until all references to that ACL are deleted.
•
Transaction Signature (TSIG) key—The value must be in the form key value, with the keyword key followed by the secret value. To accommodate space characters, the entire list must be enclosed in double quotes. For TSIG keys, see the "Transaction Security" section.
You assign each ACL a unique name. However, the following ACL names have special meanings and you cannot use them for regular ACL names:
•
any—Anyone can perform a certain action
•
none—No one can perform a certain action
•
localhost—Any of the local host addresses can perform a certain action
•
localnets—Any of the local networks can perform a certain action
Note the following:
•
If an ACL is not configured, any is assumed.
•
If an ACL is configured, at least one clause must allow traffic.
•
The negation operator (!) disallows traffic for the object it precedes, but it does not intrinsically allow anything else unless you also explicitly specify it. For example, to disallow traffic for the IP address 192.168.50.0 only, use !192.168.50.0, any.
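This first-match evaluation can be sketched as follows (covering only address and network elements; keys, named ACLs, and the reserved words other than any are omitted):

```python
import ipaddress

def acl_allows(acl, addr):
    # First matching element decides; "!" negates; if nothing matches, the
    # request is denied (at least one clause must allow traffic).
    ip = ipaddress.ip_address(addr)
    for element in acl:
        negated = element.startswith("!")
        pattern = element.lstrip("!")
        if pattern == "any":
            matched = True
        else:
            matched = ip in ipaddress.ip_network(pattern, strict=False)
        if matched:
            return not negated
    return False

# Disallow one host but allow everyone else, as in the example above:
print(acl_allows(["!192.168.50.0", "any"], "192.168.50.0"))   # False
print(acl_allows(["!192.168.50.0", "any"], "192.168.50.17"))  # True
```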
Click DNS, then ACLs to open the List/Add Access Control Lists page. Add an ACL name and match list. Note that a key value pair should not be in quotes. At the regional level, you can additionally pull replica ACLs or push ACLs to local clusters.
Use acl name create match-list, which takes a name and one or more ACL elements. The ACL list is comma-separated, with double quotes surrounding it if there is a space character. The CLI does not provide the pull/push function.
For example, the following commands create three ACLs. The first is a key with a value, the second is for a network, and the third points to the first ACL. Including an exclamation point (!) before a value negates that value, so that you can exclude it in a series of values:
nrcmd> acl sec-acl create "key h-a.h-b.example.com."
nrcmd> acl dyn-update-acl create "!192.168.2.13,192.168.2.0/24"
nrcmd> acl main-acl create sec-acl
To configure ACLs for the DNS server or zones, set up a DNS update policy, then define this update policy for the zone (see the "Configuring DNS Update Policies" section).
Transaction Signature (TSIG) RRs enable the DNS server to authenticate each message it receives that contains a TSIG RR. Communication between servers is not encrypted, but it is authenticated, which allows validation of both the authenticity of the data and the source of the packet.
When you configure the Cisco Network Registrar DHCP server to use TSIG for DNS updates, the server appends a TSIG RR to the messages. Part of the TSIG record is a message authentication code.
When the DNS server receives a message, it looks for the TSIG record. If it finds one, it first verifies that the key name in it is one of the keys it recognizes. It then verifies that the time stamp in the update is reasonable (to help fight against traffic replay attacks). Finally, the server looks up the key shared secret that was sent in the packet and calculates its own authentication code. If the resulting calculated authentication code matches the one included in the packet, then the contents are considered to be authentic.
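Those three checks can be shown schematically as follows. Real TSIG MACs cover additional fields defined in RFC 2845; this sketch signs only the raw message, and the 300-second fudge value is an assumption:

```python
import base64
import hashlib
import hmac
import time

KEYS = {"hosta-hostb-example.com.": base64.b64decode("xGVCsFZ0/6e0N97HGF50eg==")}

def verify_tsig(msg, key_name, mac, signed_time, fudge=300):
    # 1. The key name must be one the server recognizes.
    secret = KEYS.get(key_name)
    if secret is None:
        return False
    # 2. The time stamp must be reasonable (replay protection).
    if abs(time.time() - signed_time) > fudge:
        return False
    # 3. Recompute the authentication code and compare it with the packet's.
    expected = hmac.new(secret, msg, hashlib.md5).digest()
    return hmac.compare_digest(expected, mac)

msg = b"example update message"
mac = hmac.new(KEYS["hosta-hostb-example.com."], msg, hashlib.md5).digest()
print(verify_tsig(msg, "hosta-hostb-example.com.", mac, int(time.time())))  # True
```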
Creating TSIG Keys
Generating Keys
Considerations for Managing Keys
Adding Supporting TSIG Attributes
Note
If you want to enable key authentication for Address-to-User Lookup (ATUL) support, you must also define a key identifier (id attribute value). See the "Setting DHCP Forwarding" section on page 23-24.
From the Administration menu or the DNS menu, choose Keys, to open the List/Add Encryption Keys page.
For a description of the Algorithm, Security Type, Time Skew, Key ID, and Secret values, see Table 28-1. See also the "Considerations for Managing Keys" section.
To edit a TSIG key, click its name on the List/Add Encryption Keys page to open the Edit Encryption Key page.
At the regional level, you can additionally pull replica keys, or push keys to local clusters.
Use key name create secret. Provide a name for the key (in domain name format; for example, hosta-hostb-example.com.) and a minimum of the shared secret as a base-64 encoded string (see Table 28-1 for a description of the optional time skew attribute). An example in the CLI would be:
nrcmd> key hosta-hostb-example.com. create secret-string
It is recommended that you use the Cisco Network Registrar cnr_keygen utility to generate TSIG keys, so that you can add them or import them using import keys.
Execute the cnr_keygen key generator utility from a DOS prompt, or a Solaris or Linux shell:
•
On Windows, the utility is in the install-path\bin folder.
•
On Solaris and Linux, the utility is in the install-path/usrbin directory.
An example of its usage (on Solaris and Linux) is:
> /opt/nwreg2/local/usrbin/cnr_keygen -n a.b.example.com. -a hmac-md5 -t TSIG -b 16 -s 300
key "a.b.example.com." {
algorithm hmac-md5;
secret "xGVCsFZ0/6e0N97HGF50eg==";
# cnr-time-skew 300;
# cnr-security-type TSIG;
};
The only required input is the key name. The options are described in Table 28-1.
The resulting secret is base64-encoded as a random string.
You can also redirect the output to a file if you use the right-arrow (>) or double-right-arrow (>>) indicators at the end of the command line. The > writes or overwrites a given file, while the >> appends to an existing file. For example:
> /opt/nwreg2/local/usrbin/cnr_keygen -n example.com > keyfile.txt
> /opt/nwreg2/local/usrbin/cnr_keygen -n example.com >> addtokeyfile.txt
You can then import the key file into Cisco Network Registrar using the CLI to generate the keys in the file. The key import can generate as many keys as it finds in the import file. The path to the file should be fully qualified. For example:
nrcmd> import keys keydir/keyfile.txt
If you generate your own keys, you must enter them as a base64-encoded string (see RFC 4648 for more information on base64 encoding). This means that the only characters allowed are those in the base64 alphabet and the equals sign (=) as pad character. Entering a non-base64-encoded string results in an error message.
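A quick way to generate or check such a string (mirroring what cnr_keygen -b 16 produces; the function names here are illustrative, not part of the product):

```python
import base64
import binascii
import os

def make_secret(nbytes=16):
    # Mirrors cnr_keygen -b 16: a random secret, base64-encoded.
    return base64.b64encode(os.urandom(nbytes)).decode()

def is_valid_secret(s):
    # Only the base64 alphabet plus "=" padding is accepted.
    try:
        base64.b64decode(s, validate=True)
        return True
    except binascii.Error:
        return False

print(is_valid_secret("xGVCsFZ0/6e0N97HGF50eg=="))  # True
print(is_valid_secret("not base64!"))               # False
```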
Here are some other suggestions:
•
Do not add or modify keys using batch commands.
•
Change shared secrets frequently; every two months is recommended. Note that Cisco Network Registrar does not explicitly enforce this.
•
The shared secret length should be at least as long as the keyed message digest (HMAC-MD5 is 16 bytes). Note that Cisco Network Registrar does not explicitly enforce this and only checks that the shared secret is a valid base64-encoded string, but it is the policy recommended by RFC 2845.
To add TSIG support for a DNS update configuration (see the "Creating DNS Update Configurations" section), set these attributes:
•
server-key
•
backup-server-key
DNS update policies provide a mechanism for managing update authorization at the RR level. Using update policies, you can grant or deny DNS updates based on rules that are based on ACLs as well as RR names and types. ACLs are described in the "Access Control Lists" section.
Compatibility with Previous Cisco Network Registrar Releases
Creating and Editing Update Policies
Defining and Applying Rules for Update Policies
Previous Cisco Network Registrar releases used static RRs that administrators entered, but that DNS updates could not modify. This distinction between static and dynamic RRs no longer exists. RRs can now be marked as protected or unprotected (see the "Protecting Resource Record Sets" section on page 16-3). Administrators creating or modifying RRs can now specify whether RRs should be protected. A DNS update cannot modify a protected RR set, even if an RR of the given type does not yet exist in the set.
Note
Previous releases allowed DNS updates only to A, TXT, PTR, CNAME and SRV records. This was changed to allow updates to all but SOA and NS records in unprotected name sets. To remain compatible with a previous release, use an update policy to limit RR updates.
Creating an update policy initially involves creating a name for it.
Step 1
From the DNS menu, choose Update Policies to open the List DNS Update Policies page.
Step 2
Click Add Policy to open the Add DNS Update Policy page.
Step 3
Enter a name for the update policy.
Step 4
Proceed to the "Defining and Applying Rules for Update Policies" section.
Use update-policy name create; for example:
nrcmd> update-policy policy1 create
DNS update policies are effective only if you define rules for each that grant or deny updates for certain RRs based on an ACL. If no rule is satisfied, the default (last implicit) rule is to deny all updates ("deny any wildcard * *").
Defining Rules for Named Update Policies
Applying Update Policies to Zones
Defining rules for named update policies involves a series of Grant and Deny statements.
Step 1
Create an update policy, as described in the "Creating and Editing Update Policies" section, or edit it.
Step 2
On the Add DNS Update Policies or Edit DNS Update Policy page:
a.
Enter an optional value in the Index field.
b.
Click Grant to grant the rule, or Deny to deny the rule.
c.
Enter an access control list in the ACL List field.
d.
Choose a keyword from the Keyword drop-down list.
e.
Enter a value based on the keyword in the Value field. This can be a RR or subdomain name, or, if the wildcard keyword is used, it can contain wildcards (see Table 28-2).
f.
Enter one or more RR types, separated by commas, in the RR Types field, or use * for "all RRs." You can use negated values, which are values prefixed by an exclamation point; for example, !PTR.
g.
Click Add Policy.
Step 3
At the regional level, you can also push update policies to the local clusters, or pull them from the replica database on the List DNS Update Policies page.
Step 4
To edit an update policy, click the name of the update policy on the List DNS Update Policies page to open the Edit DNS Update Policy page, make changes to the fields, then click Edit Policy.
Create or edit an update policy (see the "Creating and Editing Update Policies" section), then use update-policy name rules add rule, where rule is the rule text. (See Table 28-2 for the rule wildcard values.) For example:
nrcmd> update-policy policy1 rules add "grant 192.168.50.101 name host1 A,TXT" 0
The rule is enclosed in quotes. To parse the rule syntax for the example:
•
grant—Action that the server should take, either grant or deny.
•
192.168.50.101—The ACL, in this case an IP address. The ACL can be one of the following:
–
Name—ACL created by name, as described in the "Access Control Lists" section.
–
IP address, as in the example.
–
Network address, including mask; for example, 192.168.50.0/24.
–
TSIG key—Transaction signature key, in the form key=key (as described in the "Transaction Security" section).
–
One of the reserved words:
any—Any ACL
none—No ACL
localhost—Any local host addresses
localnets—Any local network address
You can negate the ACL value by preceding it with an exclamation point (!).
•
name—Keyword, or type of check to perform on the RR, which can be one of the following:
–
name—Name of the RR, requiring a name value.
–
subdomain—Name of the RR or the subdomain with any of its RRs, requiring a name or subdomain value.
–
wildcard—Name of the RR, using a wildcard value (see Table 28-2).
•
host1—Value based on the keyword, in this case the RR named host1. This can also be a subdomain name or, if the wildcard keyword is used, can contain wildcards (see Table 28-2).
•
A,TXT—RR types, each separated by a comma. This can be a list of any of the RR types described in Appendix A, "Resource Records." You can negate each record type value by preceding it with an exclamation point (!).
•
Note that if this or any assigned rule is not satisfied, the default is to deny all RR updates.
Tacked onto the end of the rule, outside the quotes, is an index number, in the example, 0. The index numbers start at 0. If there are multiple rules for an update policy, the index serves to add the rule in a specific order, such that lower numbered indexes have priority in the list. If a rule does not include an index, it is placed at the end of the list. Thus, a rule always has an index, whether or not it is explicitly defined. You also specify the index number in case you need to remove the rule.
To replace a rule, use update-policy name delete, then recreate the update policy. To edit a rule, use update-policy name rules remove index, where index is the explicitly defined or system-defined index number (remembering that the index numbering starts at 0), then recreate the rule. To remove the second rule in the previous example, enter:
nrcmd> update-policy policy1 rules remove 1
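The first-match semantics of these rules, including the implicit trailing deny, can be sketched as follows (a toy evaluator; ACLs are reduced to a predicate and only exact-name matching is shown):

```python
def evaluate(rules, client_ip, rr_name, rr_type):
    # Rules are tried in index order; the first rule whose ACL, name, and RR
    # type all match decides. If no rule matches, the implicit final rule
    # "deny any wildcard * *" denies the update.
    for action, acl_pred, name, types in rules:
        if not acl_pred(client_ip):
            continue
        if name != "*" and name != rr_name:
            continue
        if "*" not in types and rr_type not in types:
            continue
        return action == "grant"
    return False

# "grant 192.168.50.101 name host1 A,TXT" from the example above:
rules = [("grant", lambda ip: ip == "192.168.50.101", "host1", {"A", "TXT"})]
print(evaluate(rules, "192.168.50.101", "host1", "A"))    # True
print(evaluate(rules, "192.168.50.101", "host1", "PTR"))  # False
```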
After creating an update policy, you can apply it to a zone (forward and reverse) or zone template.
Step 1
From the DNS menu, choose Forward Zones to open the List/Add Zones page.
Step 2
Click the name of the zone to open the Edit Zone page.
Tip
You can also perform this function for zone templates on the Edit Zone Template page, and for primary reverse zones on the Edit Primary Reverse Zone page (see Chapter 15, "Managing Zones").
Step 3
Enter the name or (comma-separated) names of one or more of the existing named update policies in the update-policy-list attribute field.
Note
The server processes the update-acl before it processes the update-policy-list.
Step 4
Click Modify Zone.
Use zone name set update-policy-list, equating the update-policy-list attribute with a quoted list of comma-separated update policies, as defined in the "Creating and Editing Update Policies" section. For example:
nrcmd> zone example.com set update-policy-list="policy1,policy2"
The Cisco Network Registrar DHCP server stores all pending DNS update data on disk. If the DHCP server cannot communicate with a DNS server, it periodically tests for re-established communication and submits all pending updates. This test typically occurs every 40 seconds.
Click DNS, then Forward Zones. Click the View icon in the RRs column to open the List/Add DNS Server RRs for Zone page.
Use zone name listRR dns.
Microsoft Windows DNS clients that get DHCP leases can update (refresh) their Address (A) records directly with the DNS server. Because many of these clients are mobile laptops that are not permanently connected, some A records may become obsolete over time. The Windows DNS server scavenges and purges these primary zone records periodically. Cisco Network Registrar provides a similar feature that you can use to periodically purge stale records.
Scavenging is normally disabled by default, but you should enable it for zones that exclusively contain Windows clients. Zones are configured with no-refresh and refresh intervals. A record expires once it ages past its initial creation date plus these two intervals. Figure 28-1 shows the intervals in the scavenging time line.
Figure 28-1 Address Record Scavenging Time Line Intervals
The Cisco Network Registrar process is:
1.
When the client updates the DNS server with a new A record, this record gets a timestamp, or if the client refreshes its A record, this may update the timestamp ("Record is created or refreshed").
2.
During a no-refresh interval (a default value of seven days), if the client keeps sending the same record without an address change, this does not update the record timestamp.
3.
Once the record ages past the no-refresh interval, it enters the refresh interval (also a default value of seven days), during which time DNS updates refresh the timestamp and put the record back into the no-refresh interval.
4.
A record that ages past the refresh interval is available for scavenging when it reaches the scavenge interval.
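The timeline in Figure 28-1 amounts to a simple age classification, sketched below with the default one-week intervals (the hours in the examples are chosen arbitrarily):

```python
def scavenge_state(age_hours, no_refresh=7 * 24, refresh=7 * 24):
    # Classify a record by the hours elapsed since its timestamp, using the
    # default one-week no-refresh and refresh intervals.
    if age_hours < no_refresh:
        return "no-refresh"     # identical updates do not touch the timestamp
    if age_hours < no_refresh + refresh:
        return "refresh"        # an update resets the timestamp
    return "scavengeable"       # candidate for purging at the next scvg-interval

print(scavenge_state(24))       # no-refresh
print(scavenge_state(10 * 24))  # refresh
print(scavenge_state(15 * 24))  # scavengeable
```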
Note
Only unprotected RRs are scavenged. To keep RRs from being scavenged, set them to protected. However, top-of-zone (@) RRs, even if unprotected, are not scavenged.
The following zone attributes affect scavenging:
•
scvg-interval—Period during which the DNS server checks for stale records in a zone. The value can range from one hour to 365 days. You can also set this for the server (the default value is one week), although the zone setting overrides it.
•
scvg-no-refresh-interval—Interval during which actions, such as dynamic or prerequisite-only DNS updates, do not update the record timestamp. The value can range from one hour to 365 days. The zone setting overrides the server setting (the default value is one week).
•
scvg-refresh-interval—Interval during which DNS updates increment the record timestamp. After both the no-refresh and refresh intervals expire, the record is a candidate for scavenging. The value can range from one hour to 365 days. The zone setting overrides the server setting (the default value is one week).
•
scvg-ignore-restart-interval—Ensures that the server does not reset the scavenging time with every server restart. Within this interval, Cisco Network Registrar ignores the duration between a server down instance and a restart, which is usually fairly short.
The value can range from two hours to one day. With any value longer than that set, Cisco Network Registrar recalculates the scavenging period to allow for record updates that cannot take place while the server is stopped. The zone setting overrides the server setting (the default value is 2 hours.
Enable scavenging only for zones where a Cisco Network Registrar DNS server receives updates exclusively from Windows clients (or those known to do automatic periodic DNS updates). Set the attributes listed above. The Cisco Network Registrar scavenging manager starts at server startup. It reports records purged through scavenging to the changeset database. Cisco Network Registrar also notifies secondary zones by way of zone transfers of any records scavenged from the primary zone. In cases where you create a zone that has scavenging disabled (the records do not have a timestamp) and then subsequently enable it, Cisco Network Registrar uses a proxy timestamp as a default timestamp for each record.
You can monitor scavenging activity using one or more of the log settings scavenge, scavenge-details, ddns-refreshes, and ddns-refreshes-details.
On the Manage DNS Server page, click the Run icon (
) in the Commands column to open the DNS Server Commands page (see Figure 7-1 on page 7-2). On this page, click the Run icon next to Scavenge all zones.
To scavenge a particular forward or reverse zone only, go to the Zone Commands for Zone page, which is available by clicking the Run icon (
) on the List/Add Zones page or List/Add Reverse Zones page. Click the Run icon again next to Scavenge zone on the Zone Commands for Zone page. To find out the next time scavenging is scheduled for the zone, click the Run icon next to Get scavenge start time.
Use dns scavenge for all zones that have scavenging enabled, or zone name scavenge for a specific zone that has it enabled. Use the getScavengeStartTime action on a zone to find out the next time scavenging is scheduled to start.
You can use a standard DNS tool such as dig and nslookup to query the server for RRs. The tool can be valuable in determining whether dynamically generated RRs are present. For example:
$ nslookup
default Server: server2.example.com
Address: 192.168.1.2
> leasehost1.example.com
Server: server2.example.com
Address: 192.168.1.100
> set type=ptr
> 192.168.1.100
Server: server2.example.com
Address: 192.168.1.100
100.40.168.192.in-addr.arpa name = leasehost1.example.com
40.168,192.in-addr.arpa nameserver = server2.example.com
You can monitor DNS updates on the DNS server by setting the log-settings attribute to ddns, or show even more details by setting it to ddns-details.
The Windows operating system rely heavily on DNS and, to a lesser extent, DHCP. This reliance requires careful preparation on the part of network administrators prior to wide-scale Windows deployments. Windows clients can add entries for themselves into DNS by directly updating forward zones with their address (A) record. They cannot update reverse zones with their pointer (PTR) records.
Client DNS Updates
Dual Zone Updates for Windows Clients
DNS Update Settings in Windows Clients
Windows Client Settings in DHCP Servers
SRV Records and DNS Updates
Issues Related to Windows Environments
Frequently Asked Questions About Windows Integration
It is not recommended that clients be allowed to update DNS directly.
For a Windows client to send address record updates to the DNS server, two conditions must apply:
•
The Windows client must have the Register this connection's addresses in DNS box checked on the DNS tab of its TCP/IP control panel settings.
•
The DHCP policy must enable direct updating (Cisco Network Registrar policies do so by default).
The Windows client notifies the DHCP server of its intention to update the A record to the DNS server by sending the client-fqdn DHCP option (81) in a DHCPREQUEST packet. By indicating the fully qualified domain name (FQDN), the option states unambiguously the client location in the domain namespace. Along with the FQDN itself, the client or server can send one of these possible flags in the client-fqdn option:
•
0—Client should register its A record directly with the DNS server, and the DHCP server registers the PTR record (done through the policy allow-client-a-record-update attribute being enabled).
•
1—Client wants the DHCP server to register its A and PTR records with the DNS server.
•
3—DHCP server registers the A and PTR records with the DNS server regardless of the client request (done through the policy allow-client-a-record-update attribute being disabled, which is the default value). Only the DHCP server can set this flag.
The DHCP server returns its own client-fqdn response to the client in a DHCPACK based on whether DNS update is enabled. However, if the 0 flag is set (the allow-client-a-record-update attribute is enabled for the policy), enabling or disabling DNS update is irrelevant, because the client can still send its updates to DNS servers. See Table 28-3 for the actions taken based on how various properties are set.
A Windows DHCP server can set the client-fqdn option to ignore the client request. To enable this behavior in Cisco Network Registrar, create a policy for Windows clients and disable the allow-client-a-record-update attribute for this policy.
The following attributes are enabled by default in Cisco Network Registrar:
•
Server use-client-fqdn—The server uses the client-fqdn value on incoming packets and does not examine the host-name. The DHCP server ignores all characters after the first dot in the domain name value, because it determines the domain from the defined scope for that client. Disable use-client-fqdn only if you do not want the server to determine hostnames from client-fqdn, possibly because the client is sending unexpected characters.
•
Server use-client-fqdn-first—The server examines client-fqdn on incoming packets from the client before examining the host-name option (12). If client-fqdn contains a hostname, the server uses it. If the server does not find the option, it uses the host-name value. If use-client-fqdn-first is disabled, the server prefers the host-name value over client-fqdn.
•
Server use-client-fqdn-if-asked—The server returns the client-fqdn value in the outgoing packets if the client requests it. For example, the client might want to know the status of DNS activity, and hence request that the DHCP server should present the client-fqdn value.
•
Policy allow-client-a-record-update—The client can update its A record directly with the DNS server, as long as the client sets the client-fqdn flag to 0 (requesting direct updating). Otherwise, the server updates the A record based on other configuration properties.
The hostnames returned to client requests vary depending on these settings (see Table 28-4).
Windows DHCP clients might be part of a DHCP deployment where they have A records in two DNS zones. In this case, the DHCP server returns the client-fqdn so that the client can request a dual zone update. To enable a dual zone update, enable the policy attribute allow-dual-zone-dns-update.
The DHCP client sends the 0 flag in client-fqdn and the DHCP server returns the 0 flag so that the client can update the DNS server with the A record in its main zone. However, the DHCP server also directly sends an A record update based on the client secondary zone in the behalf of the client. If both allow-client-a-record-update and the allow-dual-zone-dns-update are enabled, allowing the dual zone update takes precedence so that the server can update the secondary zone A record.
The Windows client can set advanced properties to enable sending the client-fqdn option.
Step 1
On the Windows client, go to the Control Panel and open the TCP/IP Settings dialog box.
Step 2
Click the Advanced tab.
Step 3
Click the DNS tab.
Step 4
To have the client send the client-fqdn option in its request, leave the Register this connection's addresses in DNS box checked. This indicates that the client wants to do the A record update.
You can apply a relevant policy to a scope that includes the Windows clients, and enable DNS updates for the scope.
Step 1
Create a policy for the scope that includes the Windows clients. For example:
a.
Create a policywin2k.
b.
Create a win2k scope with the subnet 192.168.1.0/24 and policywin2k as the policy. Add an address range of 192.168.1.10 through 192.168.1.100.
Step 2
Set the scope attribute dynamic-dns to update-all, update-fwd-only, or update-rev-only.
Step 3
Set the zone name, server address (for A records), reverse zone name, and reverse server address (for PTR records), as described in the "Creating DNS Update Configurations" section.
Step 4
If you want the client to update its A records at the DNS server, enable the policy attribute allow-client-a-record-update (this is the preset value). There are a few caveats to this:
•
If allow-client-a-record-update is enabled and the client sends the client-fqdn with the update bit enabled, the host-name and client-fqdn returned to the client match the client client-fqdn. (However, if the override-client-fqdn is also enabled on the server, the hostname and FQDN returned to the client are generated by the configured hostname and policy domain name.)
•
If, instead, the client does not send the client-fqdn with the update bit enabled, the server does the A record update, and the host-name and client-fqdn (if requested) returned to the client match the name used for the DNS update.
•
If allow-client-a-record-update is disabled, the server does the A record updates, and the host-name and client-fqdn (with the update bit disabled) values returned to the client match the name used for the DNS update.
•
If allow-dual-zone-dns-update is enabled, the DHCP server always does the A record updates. (See the "Dual Zone Updates for Windows Clients" section.)
•
If use-dns-update-prereqs is enabled (the preset value) for the DHCP server or DNS update configuration and update-dns-first is disabled (the preset value) for the update configuration, the hostname and client-fqdn returned to the client are not guaranteed to match the DNS update, because of delayed name disambiguation. However, the lease data will be updated with the new names.
According to RFC 2136, update prerequisites determine the action the primary master DNS server takes based on whether an RR set or name record should or should not exist. Disable use-dns-update-prereqs only under rare circumstances.
Step 5
Reload the DHCP server.
Windows relies heavily on the DNS protocol for advertising services to the network. Table 28-5 describes how Windows handles service location (SRV) DNS RRs and DNS updates.
You can configure the Cisco Network Registrar DNS server so that Windows domain controllers can dynamically register their services in DNS and, thereby, advertise themselves to the network. Because this process occurs through RFC-compliant DNS updates, you do not need to do anything out of the ordinary in Cisco Network Registrar.
To configure Cisco Network Registrar to accept these dynamic SRV record updates:
Step 1
Determine the IP addresses of the devices in the network that need to advertise services through DNS.
Step 2
If they do not exist, create the appropriate forward and reverse zones for the Windows domains.
Step 3
Enable DNS updates for the forward and reverse zones.
Step 4
Set up a DNS update policy to define the IP addresses of the hosts to which you want to restrict accepting DNS updates (see the "Configuring DNS Update Policies" section). These are usually the DHCP servers and any Windows domain controllers. (The Windows domain controllers should have static IP addresses.)
If it is impractical or impossible to enter the list of all the IP addresses from which a DNS server must accept updates, you can configure Cisco Network Registrar to accept updates from a range of addresses, although Cisco does not recommend this configuration.
Step 5
Reload the DNS and DHCP servers.
Table 28-6 describes the issues concerning interoperability between Windows and Cisco Network Registrar, The information in this table is intended to inform you of possible problems before you encounter them in the field. For some frequently asked questions about Windows interoperability, see the "Frequently Asked Questions About Windows Integration" section.
Example 28-1 Output Showing Invisible Dynamically Created RRs
Dynamic Resource Records
_ldap._tcp.test-lab._sites 600 IN SRV 0 100 389 CNR-MKT-1.w2k.example.com.
_ldap._tcp.test-lab._sites.gc._msdcs 600 IN SRV 0 100 3268 CNR-MKT-1.w2k.example.com.
_kerberos._tcp.test-lab._sites.dc._msdcs 600 IN SRV 0 100 88 CNR-MKT-1.w2k.example.com.
_ldap._tcp.test-lab._sites.dc._msdcs 600 IN SRV 0 100 389 CNR-MKT-1.w2k.example.com.
_ldap._tcp 600 IN SRV 0 100 389 CNR-MKT-1.w2k.example.com.
_kerberos._tcp.test-lab._sites 600 IN SRV 0 100 88 CNR-MKT-1.w2k.example.com.
_ldap._tcp.pdc._msdcs 600 IN SRV 0 100 389 CNR-MKT-1.w2k.example.com.
_ldap._tcp.gc._msdcs 600 IN SRV 0 100 3268 CNR-MKT-1.w2k.example.com.
_ldap._tcp.1ca176bc-86bf-46f1-8a0f-235ab891bcd2.domains._msdcs 600 IN SRV 0 100 389
CNR-MKT-1.w2k.example.com.
e5b0e667-27c8-44f7-bd76-6b8385c74bd7._msdcs 600 IN CNAME CNR-MKT-1.w2k.example.com.
_kerberos._tcp.dc._msdcs 600 IN SRV 0 100 88 CNR-MKT-1.w2k.example.com.
_ldap._tcp.dc._msdcs 600 IN SRV 0 100 389 CNR-MKT-1.w2k.example.com.
_kerberos._tcp 600 IN SRV 0 100 88 CNR-MKT-1.w2k.example.com.
_gc._tcp 600 IN SRV 0 100 3268 CNR-MKT-1.w2k.example.com.
_kerberos._udp 600 IN SRV 0 100 88 CNR-MKT-1.w2k.example.com.
_kpasswd._tcp 600 IN SRV 0 100 464 CNR-MKT-1.w2k.example.com.
_kpasswd._udp 600 IN SRV 0 100 464 CNR-MKT-1.w2k.example.com.
gc._msdcs 600 IN A 10.100.200.2
_gc._tcp.test-lab._sites 600 IN SRV 0 100 3268 CNR-MKT-1.w2k.example.com.
These questions are frequently asked about integrating Cisco Network Registrar DNS services with Windows:
Q.
What happens if both Windows clients and the DHCP server are allowed to update the same zone? Can this create the potential for stale DNS records being left in a zone? If so, what can be done about it?
A.
The recommendation is not to allow Windows clients to update their zones. Instead, the DHCP server should manage all the client dynamic RR records. When configured to perform DNS updates, the DHCP server accurately manages all the RRs associated with the clients that it served leases to. In contrast, Windows client machines blindly send a daily DNS update to the server, and when removed from the network, leave a stale DNS entry behind.
Any zone being updated by DNS update clients should have DNS scavenging enabled to shorten the longevity of stale RRs left by transient Windows clients. If the DHCP server and Windows clients are both updating the same zone, three things are required in Cisco Network Registrar:
a.
Enable scavenging for the zone.
b.
Configure the DHCP server to refresh its DNS update entries as each client renews its lease. By default, Cisco Network Registrar does not update the DNS record again between its creation and its final deletion. A DNS update record that Cisco Network Registrar creates lives from the start of the lease until the lease expires. You can change this behavior using a DHCP server (or DNS update configuration) attribute, force-dns-updates. For example:
nrcmd> dhcp enable force-dns-updates
100 Ok
force-dns-updates=true
c.
If scavenging is enabled on a particular zone, then the lease time associated with clients that the DHCP server updates that zone on behalf of must be less than the sum of the no-refresh-interval and refresh-interval scavenging settings. Both of these settings default to seven days. You can set the lease time to 14 days or less if you do not change these default values.
Q.
What needs to be done to integrate a Windows domain with a pre-existing DNS domain naming structure if it was decided not to have overlapping DNS and Windows domains? For example, if there is a pre-existing DNS domain called example.com and a Windows domain is created that is called w2k.example.com, what needs to be done to integrate the Windows domain with the DNS domain?
A.
In the example, a tree in the Windows domain forest would have a root of w2k.example.com. There would be a DNS domain named example.com. This DNS domain would be represented by a zone named example.com. There may be additional DNS subdomains represented in this zone, but no subdomains are ever delegated out of this zone into their own zones. All the subdomains will always reside in the example.com. zone.
Q.
In this case, how are DNS updates from the domain controllers dealt with?
A.
To deal with the SRV record updates from the Windows domain controllers, limit DNS updates to the example.com. zone to the domain controllers by IP address only. (Later, you will also add the IP address of the DHCP server to the list.) Enable scavenging on the zone. The controllers will update SRV and A records for the w2k.example.com subdomain in the example.com zone. There is no special configuration required to deal with the A record update from each domain controller, because an A record for w2k.example.com does not conflict with the SOA, NS, or any other static record in the example.com zone.
The example.com zone then might include these records:
example.com. 43200 SOA ns.example.com. hostmaster.example.com. (
98011312 ;serial
3600 ;refresh
3600 ;retry
3600000 ;expire
43200 ) ;minimum
example.com.86400 NS ns.example.com
ns.example.com. 86400 A 10.0.0.10
_ldap._tcp.w2k.example.com. IN SRV 0 0 389 dc1.w2k.example.com
w2k.example.com 86400 A 10.0.0.25
...
Q.
In this case, how are zone updates from individual Windows client machines dealt with?
A.
In this scenario, the clients could potentially try to update the example.com. zone with updates to the w2k.example.com domain. any other device on the network. Security by IP is not the most ideal solution, as it would not prevent a malicious attack from a spoofed IP address source. You can secure updates from the DHCP server by configuring TSIG between the DHCP server and the DNS server.
Q.
Is scavenging required in this case?
A.
No. Updates are only accepted from the domain controllers and the DHCP server. The DHCP server accurately maintains the life cycle of the records that they add and do not require scavenging. You can manage the domain controller dynamic entries manually by using the Cisco Network Registrar single-record dynamic RR removal feature.
Q.
What needs to be done to integrate a Windows domain that shares its namespace with a DNS domain? For example, if there is a pre-existing DNS zone called example.com and a Windows Active Directory domain called example.com needs to be deployed, how can it be done?
A.
In this example, a tree in the Windows domain forest would have a root of example.com. There is a pre-existing domain that is also named example.com that is represented by a zone named example.com.
Q.
In this case, how are DNS updates from individual Windows client machines dealt with?
A.
To deal with the SRV record updates, create subzones for:
_tcp.example.com.
_sites.example.com.
_msdcs.example.com.
_msdcs.example.com.
_udp.example.com.
Limit DNS updates to those zones to the domain controllers by IP address only. Enable scavenging on these zones.
To deal with the A record update from each domain controller, enable a DNS server attribute, simulate-zone-top-dynupdate.
nrcmd> dns enable simulate-zone-top-dynupdate
It is not required, but if desired, manually add an A record for the domain controllers to the example.com zone.
Q.
In this case, how are zone updates from individual Windows client machines dealt with?
A.
In this scenario, the clients could potentially try to update the example.com zone. other devices on the network. Security by IP is not the most ideal solution, as it would not prevent a malicious attack from a spoofed source. Updates from the DHCP server are more secure when TSIG is configured between the DHCP server and the DNS server.
Q.
Has scavenging been addressed in this case?
A.
Yes. The subzones _tcp.example.com, _sites.example.com, _msdcs.example.com, _msdcs.example.com, and _udp.example.com zones accept updates only from the domain controllers, and scavenging was turned on for these zones. The example.com zone accepts DNS updates only from the DHCP server. | http://www.cisco.com/c/en/us/td/docs/net_mgmt/network_registrar/7-2/user/guide/cnr72book/UG27_DDN.html | CC-MAIN-2016-18 | refinedweb | 9,774 | 54.93 |
The = 77617 and y = 33096.
Here we evaluate Rump’s example in single, double, and quadruple precision.
#include <iostream> #include <quadmath.h> using namespace std; template <typename T> T f(T x, T y) { T x2 = x*x; T y2 = y*y; T y4 = y2*y2; T y6 = y2*y4; T y8 = y2*y6; return 333.75*y6 + x2*(11*x2*y2 - y6 - 121*y4 - 2) + 5.5*y8 + x/(2*y); } int main() { int x = 77617; int y = 33096; cout << f<float>(x, y) << endl; cout << f<double>(x, y) << endl; cout << (double) f<__float128>(x, y) << endl; }
This gives three answers,
-1.85901e+030 -1.18059e+021 1.1726
none of which is remotely accurate. The exact answer is -54767/66192 = -0.827…
Python gives the same result as C++ with double precision, which isn’t surprising since Python floating point numbers are C doubles under the hood.
Where does the problem come from? The usual suspect: subtracting nearly equal numbers. Let’s split Rump’s expression into two parts.
s = 333.75 y6 + x2(11x2y2 – y6 – 121y4 – 2)
t = 5.5y8 + x/(2y)
and look at them separately. We get
s = -7917111340668961361101134701524942850.00 t = 7917111340668961361101134701524942849.1726...
The values –s and t agree to 36 figures, but quadruple precision has only 34 figures [1]. Subtracting these two numbers results in catastrophic loss of precision.
Rump went to some effort to conceal what’s going on, and the example is contrived to require just a little more than quad precision. However, his example illustrates things that come up in practice. For one, the values of x and y are not that big, and so we could be mislead into thinking the terms in the polynomial are not that big. For another, it’s not clear that you’re subtracting nearly equal numbers. I’ve been caught off guard by both of these in real applications.
Floating point numbers are a leaky abstraction. They can trip you up if you don’t know what you’re doing. But the same can be said of everything in computing. We can often ignore finite precision just as we can often ignore finite memory, for example. The proper attitude toward floating point numbers is somewhere between blissful ignorance and unnecessary fear.
Similar posts
[1] A quad precision number has 128 bits: 1 sign bit, 15 for exponents, and 112 for the fractional part. Because the leading zero is implicit, this gives a quad 113 bits of precision. Since log10(2113) = 34.01…, this means a quad has 34 decimals of precision.
5 thoughts on “Just evaluating a polynomial: how hard could it be?”
I love these posts. Keep them coming.
Hmm. That expression is simple enough that, if you had an arbitrary precision integer library*, you could calculate the result exactly. (You’d need to multiply by 100 to make the coefficients integers and do something sensible with the x/2y term.)
Of course, that lacks generality something fierce, but generality wasn’t the question. And the intermediate results wouldn’t be all that big, 30 digits or so.
*: Memory has it that certain premodern LISP implementations (e.g. MACLISP) did have infinite precision integer arithmetic, so I’d think that one of the zillions of modern languages would too…
Hi John,
There is a tool called Herbie that can be used to detect when things may go awry with floating point expressions. Have you try to see if some of these examples with it?
There’s an easy way to do this in Smalltalk : just use fractions to keep the necessary precision and end up with the exact result! My blog post: | https://www.johndcook.com/blog/2019/11/12/rump-floating-point/ | CC-MAIN-2019-51 | refinedweb | 610 | 75.61 |
Many applications need recurring processes that run in the background to handle batch processing, execute long-running operations, or perform cleanup routines. In SharePoint 2010, these operations are written as SharePoint Timer Jobs, and in the following article I will cover all of the ins and outs of writing one.
What is a Timer Job?
Timer Jobs are recurring background processes that are managed by SharePoint. If you navigate to the Central Administration site, click on the Monitoring link from the main page, and then choose the Review job definitions link under the Timer Jobs section, then you’ll see a list of scheduled timer job instances. Notice that I did not say a list of timer jobs, but rather a list of scheduled timer job instances. Unfortunately, the term ‘Timer Job’ in SharePoint is a bit too general. A timer job really consists of three parts: Timer Job Definitions, Timer Job Instances, and Timer Job Schedules.
A timer job definition is a .NET class that inherits from the SPJobDefinition class and defines the execution logic of the Timer Job. Since it is just a .NET class, the Timer Job Definition has all of the things that you would expect a class to have: properties, methods, constructors, etc.
A timer job instance, as you may have guessed, is an object instance of the .NET Timer Job Definition class. You can have multiple instances of a timer job definition, which allows you to define a Timer Job Definition once, but vary the way it operates by specifying different property values for each Timer Job Instance. Whether you need one or many instances of a Timer Job Definition depends entirely on what you are trying to accomplish.
A timer job schedule is the last part of the puzzle. SharePoint exposes a series of pre-defined scheduling classes that derive from the SPSchedule class. A timer job instance must be associated with one of these schedules in order to run, even if you only want to run the timer job instance once.
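To make this concrete, here is a sketch of how a timer job instance gets associated with a schedule. The helper class and method names are my own for illustration; SPMinuteSchedule (one of the pre-defined SPSchedule subclasses) and the Schedule and Update members come from the SharePoint object model:

```csharp
using Microsoft.SharePoint.Administration;

public static class TimerJobScheduler
{
    // Illustrative helper: schedule an existing job instance to run every 15 minutes.
    public static void ScheduleEveryFifteenMinutes(SPJobDefinition job)
    {
        SPMinuteSchedule schedule = new SPMinuteSchedule();
        schedule.BeginSecond = 0;   // earliest second in the window the job may start
        schedule.EndSecond = 59;    // latest second in the window
        schedule.Interval = 15;     // run every 15 minutes

        job.Schedule = schedule;    // associate the schedule with the instance
        job.Update();               // persist the instance; this is what registers it
    }
}
```

Other SPSchedule subclasses (SPHourlySchedule, SPDailySchedule, SPWeeklySchedule, and so on) follow the same pattern with different window properties.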
Timer Jobs in the Central Administration UI
As seems standard with a number of constructs in SharePoint, Microsoft has made it a bit confusing for developers by using the term "Job Definition" in the SharePoint user interface to mean something a bit different from what you would expect if you work with them in code.
In Central Administration, you can review all of the “Job Definitions” by clicking on the Monitoring link from the main screen (1); then under the Timer Jobs heading clicking the Review Job Definitions (2) link. Since there is a SPJobDefinition class, you may be led to believe that this screen shows all of the timer job definitions that are available, but that is not the case. SharePoint does not have a mechanism to register a job definition by itself, so it does not have a list of the available job definitions. SharePoint only maintains a list of timer job instances that have been scheduled (i.e. SPJobDefinition instances with an associated SPSchedule). So the Review Job Definitions page is really a list of timer job instances that have been scheduled.
Adding to the confusion, there is also a link from the Review Job Definitions page to Scheduled Jobs. Since a timer job instance needs to be scheduled before it can be used, you would think this page contains a list of timer job instances that have been scheduled. Technically-speaking, it does, but it’s basically the same list that you find on the Review Job Definitions page with a different view. The main difference is that this page displays, and is sorted by, the Next Start Time (which informs you when the job will run next) instead of by Title. Timer jobs with schedules that have been disabled will also not appear on this list, so it may have a fewer number of items listed than the Review Job Definitions page. Clicking on the Title from either page takes you to the same timer job instance configuration page that allows you to modify the schedule of the timer job instance.
Timer Job Associations
A timer job instance must be associated with either a SharePoint web application (SPWebApplication) or a SharePoint service application (SPService). It may also be associated with a server (SPServer) if you desire, but it is not a requirement. The reason behind the association requirements is that the SPJobDefinition class inherits from the SPPersistedObject class.
Without getting into many of the technical details, SharePoint automatically persists state information about SharePoint objects in the farm using a tree-like structure of SPPersistedObject instances. The root of this tree is the SharePoint Farm itself (an SPFarm object) and includes the various web applications, services, and a myriad of other SharePoint objects that reside in the farm. All SPPersistedObject instances must have a parent in order to reside in the tree, and Microsoft deemed the SPWebApplication and SPService objects appropriate places for timer job instances to live in that hierarchy.
What do Timer Job Associations Mean for a Developer?
What this means for you, the developer, is that there are two main constructors for the SPJobDefinition class. One of the constructors allows you to associate the timer job with a web application, and the other allows you to associate it with a service application. You will need to determine which one is best suited for your situation, a chore that should be relatively simple. If you are developing a service application, then it should be associated with that service application. Otherwise, it should be associated with a web application.
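For reference, the two constructors are shaped roughly like this (paraphrased from the server object model; the server argument may be null if you do not want to pin the job to a particular server):

```csharp
// Association with a web application (optionally pinned to one server):
protected SPJobDefinition(string name, SPWebApplication webApplication,
                          SPServer server, SPJobLockType lockType);

// Association with a service application:
protected SPJobDefinition(string name, SPService service,
                          SPServer server, SPJobLockType lockType);
```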
One question that may quickly arise is: which web application should I associate my timer job with if it really isn't targeting a specific web application? I recommend associating it with the Central Administration web application, which will be demonstrated later on in this article.
You also have the option of associating a timer job with a specific server in the farm (SPServer). By doing so, it means that your timer job will only run on that one server. A server can be specified for either one of the constructors. In case you were curious, server association has absolutely nothing to do with the SPPersistedObject hierarchy.
How Do I Associate My Timer Job with the Central Admin Web Application?
You can get a reference to the web application associated with the Central Admin site through the SPAdministrationWebApplication.Local property. Just pass this as the web application to the web application constructor and your timer job will be associated with the Central Admin Web Application.
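A short sketch of that association (MyTimerJob is a hypothetical SPJobDefinition subclass whose constructor forwards to the web-application base constructor):

```csharp
using Microsoft.SharePoint.Administration;

// SPAdministrationWebApplication.Local is the Central Administration web application.
SPWebApplication centralAdmin = SPAdministrationWebApplication.Local;
MyTimerJob job = new MyTimerJob("MyCompany.CleanupJob", centralAdmin);
```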
Can I Associate My Timer Job with a Server and Skip the Other Entities?
No. It has to be associated with a web application or service application because it must have a parent in the SPPersistedObject hierarchy for the farm. As mentioned before, the server associated with a timer job instance has nothing to do with that persistence hierarchy.
Can I Just Pass A Null Association Into the Constructor?
Nice try, but no. If you attempt to get around the association by passing null into the constructor for either the web application or service application, it will result in a null object reference exception when that constructor is called in your code.
Using Associated Entities in Code
There are three properties on the SPJobDefinition that are important when it comes to the SharePoint entities associated with a timer job: WebApplication, Service, and Server. As you would hopefully expect, associating a timer job with any of these items results in these properties being populated with a reference to the associated item. Timer jobs don’t necessarily need to use these references, but if you do happen to need them they are available.
How Do I Write a Timer Job Definition?
Writing a timer job definition is extremely easy. All you have to do is create a class that inherits from SPJobDefinition, implement a few constructors, and override the Execute method.
Required Assemblies
You will need to add a reference to the Microsoft.SharePoint.dll if you are starting from a blank project. If you created a SharePoint 2010 project, you should not need to manually add any references to your project to write a timer job. If you need to reference an assembly manually, most of the SharePoint assemblies are located in
C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\ISAPI
Recommended using Statements
Both the SPJobDefinition and SPSchedule classes reside in the Microsoft.SharePoint.Administration namespace. The execute method also accepts a GUID parameter, which lives in the System namespace. As such, you will need (at least) the following two using statements if you don’t want to fully qualify everything in your code:
Inherit from SPJobDefinition
All SharePoint timer job definitions ultimately inherit from SPJobDefinition, so our class will need to do the same. You can, of course, inherit from a different class as long as SPJobDefinition is somewhere in the inheritance chain.
Write the Timer Job Definition’s ConstructorsThe SPJobDefinition class exposes three constructors: the default constructor, a web application association constructor, and a service application association constructor. You have two requirements when writing the constructor(s) for your timer job definition:
- Your timer job must implement a default (parameter-less) constructor.
- Your timer job must call either the web service association constructor or the service application association constructor from the SPJobDefinition class in one of its constructors.
You are required to implement a default (parameter-less) constructor for your timer job for deserialization purposes. The SPJobDefinition class inherits directly from the SPPersistedObject class, which means that SharePoint automatically stores the state of your timer job to a permanent store without you having to implement any of that storage logic. However, the deserialization mechanism expects a default parameter-less constructor to be present in order to operate correctly. Failure to implement the default constructor will result in the following message when you call .Update() from your timer job class:
<TimerJobClass> cannot be deserialized because it does not have a public default constructor.
You are also required to call either the web service association constructor or the service application association constructor from one of your timer job definition’s constructors. Remember, your timer job must be associated with one of these two entities. The only time that an association can be created is from the constructors on the SPJobDefinition class, so one of your constructors has to be associated with a call down to the appropriate base constructor.
There are four key pieces of information for which your timer job definition needs to account in the constructor:
- Web Application or Web Service Association Reference
- Server Association Reference (optional)
- Name of the timer job
- Lock Type of the Job
Please understand that the two constructor requirements listed previously are not mutually exclusive – in other words, you are not required to have 2 constructors. If you have static values for these four key pieces of information, then you can implement a single default constructor that calls the appropriate base constructor with your static values, as in the following example:
Notice that the example above satisfies both requirements: the MyTimerJob class exposes a default constructor, that default constructor is calling down to the web application association constructor of the base class, and all four key pieces of information have been provided.
Many timer jobs will, however, require information to be passed in via a constructor. If this is the case, then you will need to implement two constructors: the default (parameter-less) constructor, and the constructor with the parameters that need to be passed.
If you are implementing a default constructor simply for the sake of having it for deserialization purposes, then you will find the following constructor helpful:
Notice that you do not need to worry about passing any values into the base default constructor. Remember, SharePoint uses this constructor for deserialization, so all of the properties required by your timer job will be populated back into the timer job by SharePoint during that deserialization process.
Naming a Timer Job
One of the four key pieces of information that you must provide a timer job is the timer job name. Timer jobs have two properties related to naming: Name and DisplayName. The Name property is used to identify a timer job, so it must be unique for the timer job instances that exist under the parent SPService or SPWebApplication with which the timer job is associated. In other words, you can have two timer jobs with the same name, as long as they exist under different parents.
DisplayName should contain the name of the timer job as it will appear in Central Administration. If you do not provide an explicit DisplayName value, the value will default to the value in the Name property. Since this name is only used as a display value, it does not have to be unique. You should be aware, however, that the timer job instance lists in Central Administration do not display in a hierarchy – they appear as a flat list. As such, you should take care to distinguish the timer job DisplayName in some way for the sake of users.
For example, let’s say you have a timer job definition that cleans up files in a web application. You’ve got two timer job instances created, one of which is associated with Web Application A and one of which is Associated with Web Application B. Since the timer job instances reside under different web applications, they can both have the same Name. If you do not give them different display names, this is what users will see in the timer job instance list in Central Administration:
- Timer Job
- Timer Job
This can be a bit confusing because it looks like the same timer job is defined twice. By simply varying the DisplayName based on the associated web application’s Title, ID, or URL, you can clear up the confusion and display something far more meaningful like:
- Timer Job – Web App A
- Timer Job – Web App B
Later on we’ll discuss how to store custom properties in a timer job instance. Be aware that you can also use these custom properties to set both the Name and the DisplayName values to ensure that the name is unique under a given parent and that the user has some way to distinguish between timer jobs in the timer job instance list.
Specifying a Timer Job Lock Type
Another one of the four key pieces of information that you must provide the timer job instance constructor is the SPJobLockType value which helps dictate where and how many times the timer job runs. There are three options for this value:
If you are wondering exactly where a timer job will run, know that lock types play a major role in defining that location. We will cover in more detail how to determine exactly which machine a timer job will run on later on in this article.
Override the Execute Method
The main reason you write a timer job is to run code on a periodic basis. All of the code you want to run when your timer job executes should be located in the conveniently named Execute method. Simply override the method and put in whatever code you want to run:
As mentioned before, if your timer job has a lock type value of ContentDatabase, then the targetInstanceId parameter is populated with the ID of the content database for which the timer job is being run. Otherwise, you can disregard this parameter. There is really no limit to what you can do inside of this method, so your timer job process can be as simple or as complicated as you like.
Writing the MyTimerJob Demo Timer Job Definition
In an effort to demonstrate what we’ve discussed, we’ll go ahead and build out a very simple timer job definition that writes out a single line of text containing the current date/time to a file each time it runs. Of course, you can implement a lot more complex timer jobs, but for the sake of demonstration I do not want to have an overly complex scenario that takes a lot of time to setup. Plus, it should make it easy to see that your timer job is actually working.
Following is the code for the MyTimerJob class:
The first thing to notice about this class is that it derives from SPJobDefinition class. As mentioned before, all timer job definitions ultimately derive from this class.
Next, we have two constructors. The first constructor is a default, parameter-less constructor that is required for serialization purposes. The second constructor mimics the base constructor from SPJobDefinition that associates a timer job with a SharePoint web application.
After the constructors, you will see the overridden Execute method. This is the method that contains any code that you want executed during the timer job. In our case, the code builds out a directory and file name, ensures the directory exists, and writes the current date/time to the file.
That’s all there is to it. Making a timer job definition is a pretty simple and straightforward process.
Scheduling a Timer Job Instance
Once you have a job definition, all that is left to do is create an instance of that definition, set any applicable properties on the instance, and associate instance with a schedule that defines how often the timer job runs. To do this, you have to create an appropriate SPSchedule object and assign it to the Schedule property on your timer job instance.
Since the SPSchedule class is an abstract base class that defines common functionality required for scheduling a timer job, you will not be creating instances of an SPSchedule object directly. SharePoint ships with a number of classes that derive from the SPSchedule class that offer a variety of scheduling options. Most of these are fairly self-explanatory, but here is the list none-the-less:
Scheduling a Job to Run Each Day
Following is an example demonstrating how to schedule the MyTimerJob timer job to run once per day between the hours of 11:05 a.m. and 2:15 p.m.
First, we create an instance of the timer job class. In this case we’re calling the timer job instance “My Timer Job (Hourly)”, associating it with the Central Administration web application, not associating it with any particular server in the farm, and specifying the lock type value as Job.
Next, we assign a new SPDailySchdedule instance to the Schedule property on the timer job instance. We’re using the object initialization syntax on the SPDailySchdedule constructor to define the window in which the timer job may run – the 11:05 start time is defined using the BeginHour and BeginMinute properties, and the 2:15 end time is defined using the EndHour and EndMinute properties. You can also get really specific and define the start and end time windows down to the second using the BeginMinute and EndMinute properties if you so desire.
Finally, we call the Update method on the timer job instance to save the timer job instance and starts it running on the schedule you have defined. If you fail to call update, then your timer job will not run and it will not show up in any of the timer job lists in Central Administration. If you fail to associate your timer job instance with a schedule, your timer job will not run and will not show up in the timer job lists in Central Administration (even if you do call the Update method).
What’s with the Begin an End Properties on Schedules?
All of the built-in SPSchedule classes allow you to define a window in which the timer job may run. The date/time when that window begins is defined by the properties prefixed with Begin like BeginDay, BeginHour, BeginMinute, etc. The end of the window is defined by the properties prefixed with End like EndDay, EndHour, and EndMinute.
An SPSchedule object communicates the next date and time when the timer job instance is supposed to run via the NextOccurrence method. For all of the built-in SharePoint SPSchedule objects, this method returns a randomized value that occurs somewhere in the window that has been defined. This is extremely beneficial for processor-intensive jobs that run on all servers in the farm because each machine in the farm calls the NextOccurrence method and receives a different start time than the other servers in the farm. Thus, the start-time for each server will be staggered to avoid slamming all of the servers in the farm with a processor-intensive timer job at the same time, allowing your farm to continuing processing requests.
Can SharePoint “Miss” a Timer Job if the Scheduled Window is Too Small?
No. Let’s say that you define a timer job instance with a schedule that has a two second window that starts at 10:00:01 and ends at 10:00:02. SharePoint uses the start and end times to calculate a random value that falls between those values. As such, the timer job will be scheduled to run either at 10:00:01 or 10:00:02 (because those are the only two possible values in this scenario). Although it will be random, it is a concrete value that the SharePoint timer service is aware of and will use to ensure that all timer jobs are run.
For example, let’s say that at 10:00:00 the SharePoint Timer Job Service “checks” to see if it should be running any jobs. Since it is not yet time to run your timer job instance, the job will not be started. Let’s say the next time the SharePoint Timer Service checks to see if it should be running a job is at 10:00:05, effectively missing the window when the timer job can start. Some people mistakenly believe that since the window was missed, SharePoint will simply not run the job. Rest assured, that is not the case. SharePoint has a concrete time that that timer job instance was supposed to start, and if the current time is past that start time then SharePoint is going to start that timer job.
Can I Make a Custom Schedule Class?
Writing a custom SPSchedule is outside of the scope of this article, but it is certainly possible. The question is how useful is it? Writing a custom schedule requires you to create a class that inherits from the SPSchedule class and overrides the NextOccurrence and ToString methods. The ToString method must return a valid recurrence string that defines how often the job recurs, and the syntax for this string has predefined recurrence intervals (e.g. you can’t create your own). As such, it appears that you’re stuck with the intervals that have already been defined, and those are effectively exposed via the SPSchedule objects outlined in the table above.
You can find more information about recurrence values from Mark Arend’s blog post on the subject.
Note: I have not done extensive research in this area, and I am not completely familiar with how the recurrence value from the ToString method and the DateTime returned from the NextOccurrence depend on one another. You may be able to “trick” SharePoint by having a valid recurrence value but returning a NextOccurrence value that matches the schedule you actually want to follow.
How Do I Update the Progress Bar for a Timer Job?
Timer jobs have the ability to communicate their overall progress to an end user, which is especially useful for long running processes. Seeing the progress bar update lets users know that the timer job is working and hasn’t hung for some reason. It also gives them a feeling for how long the timer job is going to run if they are waiting for it to complete.
Updating the progress for a timer job is extremely simple: you just call the UpdateProgress method and pass in a value from 0 to 100 to indicate the completion percentage of your timer job. The hardest part is probably figuring out the integer math to come up with the percentage:
One mistake that people often make is calculating the percentage like you were taught in school – by calculating the number of items processed by the total and then multiplying by 100. Unfortunately, this always results in a value of 0 because a decimal value is truncated in integer math. So you need to multiply the dividend by 100 first so your value will have meaning after the decimal portion is truncated.
How Do You Pass Parameters to a Timer Job?
You have two approaches for passing information to a timer job. One option is to have your timer job read from a defined location, like a SharePoint list or SQL database. To pass information to your list, you just write information to that designated location and your timer job will have access to it. This approach works well when you only plan on having one instance of your timer job definition.
You also have the option of storing key/value pairs along with your timer job instance using the Properties Hashtable on the SPJobDefinition. The only requirement is that any values you place in the Hashtable must be serializable because they will be persisted to the SPPersistedObject hierarchy with your timer job instance. Since the properties are serialized with each timer job instance, this approach makes a lot of sense when you plan on having multiple instances of a timer job definition.
For an example of how to use the Properties Hashtable, see the next section.
Are Class-Level Timer Job Properties Stored Automatically?
No. Although a timer job instance is serialized automatically into the SPPersistedObject hierarchy, the public properties on the timer job are not automatically included in this process. If you want to expose a strongly-typed property that is serialized, back the property with the Properties Hashtable as demonstrated in the following example:
Deleting a Timer Job
Deleting a timer job is really simple. All you have to do is call the Delete method on the timer job instance. The real challenge is getting a reference to your timer job instance so you can call that Delete method. Both the SPWebApplication and SPService classes expose a property named JobDefinitions that contains a SPJobDefinitionCollection containing a collection of the timer job instances associated with the entity. Unfortunately, SPJobDefinitionCollection does not expose any helpful methods for locating your timer job instance, so you’ll have to manually iterate through the collection and check each item to see if it’s the one you want.
Following is a helpful extension method named DeleteJob that demonstrates how to look through the collection and find your timer job instance:
If you include this extension method, then you can call DeleteJob directly from the JobDefinitions property on both the SPWebApplication and SPService classes:
Which Server Does the Timer Job Run On?
One frequently asked question about timer jobs is which server do they run on? The answer depends on the following factors:
- Which SPJobLockType is associated with the timer job instance?
- Which server is the code that creates the timer job instance running on?
- Is the timer job instance associated with a specific server?
- Is the parent web application or service application associated with the timer job instance provisioned to the server?
Remember that timer jobs are associated with either a web application or a service application. In order for a server to be eligible to run a timer job instance, the server must have the associated web application or service application for the timer job provisioned to the server.
When the SPJobLockType of the timer job instance is set to Job or ContentDatabase, then the timer job is executed on a single server. By default, the server that called the code to create the timer job instance will also be the server that actually runs the timer job. However, if the server is ineligible to run the timer job because it does not have the associated web or service application provisioned, then the timer job will be run by the first server in the farm that does have the appropriate associated entity provisioned.
When the SPJobLockType of the timer job instance is set to None, then the timer job is executed on all of the servers in the farm on which the timer job instance’s associated web or service application has been provisioned.
When a specific server has been associated with a timer job instance, the timer job will only run on the specified server, even if the SPJobLockType is None. Furthermore, if the server is ineligible to run the timer job because it does not have the appropriate web or service application provisioned, then the job will simply not run.
What Account Does the Timer Job Run Under?
Timer jobs are executed by the SharePoint 2010 Timer Service (OWSTIMER.EXE), which you can find in the Services list in under the Administrative Tools item in the Windows Control Panel of your server. To determine the account the SharePoint 2010 Timer Service runs under, just look at the account located in the Log On As column:
By default, the Farm account is associated with the SharePoint 2010 Timer Job. However, there is nothing stopping an administrator from changing the service account through the Windows Services interface. If you are experiencing security or access issues, make sure to manually check the account on each server in the farm to ensure they are all using the same account, and that it is the account you expected.
Conclusion
Timer Jobs are just one small part of the beast that is SharePoint, but I hope this has left you with a good understanding of what timer jobs are, how they work, and how to write them!
How does your SharePoint application perform? The Beta release of ANTS Performance Profiler 8 adds support for SharePoint 2013, helping rapidly identify and understand SharePoint performance issues. Try the Beta. | https://www.red-gate.com/simple-talk/dotnet/.net-tools/a-complete-guide-to-writing-timer-jobs-in-sharepoint-2010/ | CC-MAIN-2018-05 | refinedweb | 5,039 | 56.69 |
Red Hat Bugzilla – Bug 483262
Valgrind doesn't work with GDB properly
Last modified: 2010-09-13 23:36:08 EDT
+++ This bug was initially created as a clone of Bug #453688 +++
Description of problem:
Attempting to use valgrind with --db-attach=yes on a program that triggers an
error, and then attaching gdb causes gdb to error, and is unusable. Once gdb
exits, valgrind resumes control, and the program (seems to) continue normally.
Version-Release number of selected component (if applicable):
broken: RHEL-4.6 gdb-6.3.0.0-1.153.el4.x86_64 (by valgrind-3.1.1-1.EL4.x86_64)
OK: RHEL-4.5 gdb-6.3.0.0-1.143.el4.x86_64 (by valgrind-3.1.1-1.EL4.x86_64)
How reproducible:
Every time
Steps to Reproduce:
1. Create a program that causes valgrind to detect an error, such as an invalid
read or write. The following example is sufficient
#include <stdlib.h>
int
main() {
int* arr = (int*)malloc(8);
int j = arr[30];
return 0;
}
2. Compile the example program (gcc -g3 -Wall val-db.c -o val-db)
3. Run valgrind with the following options "valgrind --db-attach=yes ./val-db"
Actual results:
GDB freaks out, and cannot debug. The following occurs on-screen:
[bash]root@x86-64-4as-6-m1.lab.bos.redhat.com:/root/jkratoch/redhat# valgrind --db-attach=yes ./val-db
==25917== Memcheck, a memory error detector.
==25917== Copyright (C) 2002-2005, and GNU GPL'd, by Julian Seward et al.
==25917== Using LibVEX rev 1575, a library for dynamic binary translation.
==25917== Copyright (C) 2004-2005, and GNU GPL'd, by OpenWorks LLP.
==25917== Using valgrind-3.1.1, a dynamic binary instrumentation framework.
==25917== Copyright (C) 2000-2005, and GNU GPL'd, by Julian Seward et al.
==25917== For more details, rerun with: -v
==25917==
==25917== Invalid read of size 4
==25917== at 0x4004C6: main (val-db.c:6)
==25917== Address 0x4A3C0A8 is not stack'd, malloc'd or (recently) free'd
==25917==
==25917== ---- Attach to debugger ? --- [Return/N/n/Y/y/C/c] ---- y
==25917== ---- Attach to debugger ? --- [Return/N/n/Y/y/C/c] ---- starting debugger
==25917== starting debugger with cmd: /usr/bin/gdb -nw /proc/25918/fd/1014 25918".
Attaching to program: /proc/25918/fd/1014, process 25918
Redelivering pending Segmentation fault.
valgrind: m_signals.c:1001 (default_action): Assertion 'VG_(is_running_thread)(tid)' failed.
==25918== at 0x70010E08: report_and_quit (m_libcassert.c:122)
sched status:
running_tid=1
Thread 1: status = VgTs_Runnable
==25918== at 0x4004C6: main (val-db.c.
Program process 0 exited: 1 (exited)
/root/jkratoch/redhat/25918: No such file or directory.
Expected results:
GDB attaches properly, and developer is able to debug the process.
Additional info:
Regression by the Bug 233746:
* Mon Jul 30 2007 Jan Kratochvil <jan.kratochvil@redhat.com> - 6.3.0.0-1.151
- Never lose any pending signal while attaching - resubmit them (BZ 233746).
This bugzilla has Keywords: Regression.
Since no regressions are allowed between releases,
it is also being proposed as a blocker for this release.
Please resolve ASAP. | https://bugzilla.redhat.com/show_bug.cgi?id=483262 | CC-MAIN-2018-17 | refinedweb | 506 | 61.22 |
I like static web-sites, always have. More than that I like blogging sites with lots of capability. Mostly I've rolled my own there and have done for some years now.
Best of all worlds for me would be a package that can produce a static blog site but with lots of capability, provided that same package can also easily produce without contortions web sites that aren't "blogs".
Get Nikola. It does all of the above and is under active development.
I've long seen Nikola mentioned in Roberto Alsina's posts that make their way through to Planet Python but only very recently have I taken the time to install it to a virtual environment and have a go. I like it so much I'm converting a bunch of static sites I host for various non-profit organizations and took on the task of developing a new site for another non-profit because I knew I had a useful tool in my back pocket.
Thanks Roberto and the greater Nikola universe for making my day (evening) a little easier.
View from the deck of the cafe at Living Forest Camp Ground.
In conjunction with BC Family French Camp held each summer (Nanaimo, Salmon Arm, Guillam Lake), we've been coming here for years. French camp or not, it's well worth a visit!
I've not written anything relating to Python for ages, but today it's a two-fer: a post announcing (since the authors aren't publicity hounds) an update to DurusWorks, and this post on string templating in Python, which came about only because Andriy Kornatskyy, the author of Wheezy.template, happened to update his results table.
Since I needed to test the new DurusWorks and the QPY templating package out anyway, I cooked up two entries for the bigtable.py benchmark. The first example utilizes QPY's smart-string escaping found in xml() and join_xml(), and, happily, this code looks almost identical (for a reason) to the standard lib inspired list_append "template" function found in bigtable.py:
from qpy import xml, join_xml

def test_qpy_list_append():
    b = []
    w = b.append
    table = ctx['table']
    w(xml('<table>\n'))
    for row in table:
        w(xml('<tr>\n'))
        for key, value in row.items():
            w(xml('<td>'))
            w(key)
            w(xml('</td><td>'))
            w(value)
            w(xml('</td>\n'))
        w(xml('</tr>\n'))
    w(xml('</table>'))
    return join_xml(b)
Look at the results table at the end of this post and you'll see QPY is pretty fast compared to plain-ol-string operations given smart XML escaping is going on too. QPY's xml type is a subclass of Python's string class; QPY provides a C extension module for performance.
So that was fast, but coding web applications that way gets ugly even faster. Fortunately for no pain and all the gain, you can rewrite the above as:
def test_qpy_template:xml ():
    table = ctx['table']
    '<table>\n'
    for row in table:
        '<tr>\n'
        for key, value in row.items():
            '<td>'
            key
            '</td><td>'
            value
            '</td>\n'
        '</tr>\n'
    '</table>\n'
And even better, string substitutions don't get in the way of performance or smart escaping of untrusted input:
def test_qpy_template_sub:xml ():
    table = ctx['table']
    '<table>\n'
    for row in table:
        '<tr>\n'
        for key, value in row.items():
            '<td>%s</td><td>%s</td>\n' % (key, value)
        '</tr>\n'
    '</table>\n'
Or, if Python 3's .format() method turns your crank more:
def test_qpy_template_fmt:xml ():
    table = ctx['table']
    '<table>\n'
    for row in table:
        '<tr>\n'
        for key, value in row.items():
            '<td>{}</td><td>{}</td>\n'.format(key, value)
        '</tr>\n'
    '</table>\n'
Now we're talking. With these last two examples it becomes clearer that QPY templates turn traditional templating upside down in that QPY offers a sane mechanism to have content-in-code rather than code-in-content which is the common approach to HTML templating.
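To make the escaping idea concrete, here is a toy sketch of how a "smart string" type can work in plain Python: markup-typed strings pass through untouched, while plain strings get escaped on the way in. This is an illustration of the concept only, not QPY's actual implementation (QPY backs its xml type with a C extension for speed), and the names below are my own:

```python
# Toy illustration of a "smart string" escaping scheme, similar in
# spirit to QPY's xml type.  Not QPY's actual implementation.
from html import escape

class Markup(str):
    """A str subclass marking content that is already safe markup."""
    pass

def quote(value):
    if isinstance(value, Markup):
        return str(value)                  # already safe: pass through
    return escape(str(value), quote=True)  # untrusted: escape it

def join_markup(parts):
    """Join a mixed list of Markup and plain values into safe Markup."""
    return Markup(''.join(quote(p) for p in parts))

cell = join_markup([Markup('<td>'), '<script>alert(1)</script>', Markup('</td>')])
print(cell)   # <td>&lt;script&gt;alert(1)&lt;/script&gt;</td>
```

The payoff is that untrusted values can never sneak into the output as live markup, yet the template author never has to call an escape function by hand.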
QPY's style will be an acquired taste for some, and is definitely not suitable for non-programmers.
Comparative results table:
Linux Mint Debian Edition, Virtual Machine on a high end Windows box
2 Cores configured, 4GB RAM

(wz) % python3 bigtable.py
                      msec     rps   tcalls  funcs
chameleon            35.61   28.08   182033     25
cheetah          not installed
django              504.84    1.98  1503067     51
jinja2               28.18   35.48    60020     27
list_append          33.07   30.24   103008     12
list_extend          32.96   30.34    63008     12
mako                 24.24   41.26    93036     37
qpy_list_append      17.23   58.02    53008      9
qpy_template         17.06   58.63    53008      9
qpy_template_fmt     24.44   40.92    23008     10
qpy_template_sub     13.98   71.55    13008      9
tenjin               21.40   46.73   123012     16
tornado              76.35   13.10   353023     23
web2py           not installed
wheezy_template      34.15   29.28   103010     14
In general the relative performance between tools looks to match Andriy's own results except for his own tool wheezy.template, and I can't account for why that would be so, but no doubt Andriy will sort things out soon.
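If you want to sanity-check numbers like these yourself, a bare-bones harness in the spirit of bigtable.py is easy to write. The render function below is a stand-in of my own, not one of the actual benchmark entries:

```python
# Rough-and-ready timing harness; the render function is a stand-in.
import timeit

ctx = {'table': [dict(a=1, b=2, c=3, d=4, e=5,
                      f=6, g=7, h=8, i=9, j=10)
                 for _ in range(1000)]}

def render():
    b = []
    w = b.append
    w('<table>\n')
    for row in ctx['table']:
        w('<tr>\n')
        for key, value in row.items():
            w('<td>%s</td><td>%s</td>\n' % (key, value))
        w('</tr>\n')
    w('</table>')
    return ''.join(b)

n = 10
msec = timeit.timeit(render, number=n) * 1000 / n   # mean msec per render
print('%.2f msec, %.2f rps' % (msec, 1000.0 / msec))
```

Absolute numbers will vary wildly by machine, so only the relative ordering between tools is worth reading into.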
The truly good news is there are a number of well-done, fast-enough templating solutions for Python web application developers to lean on so pick the one(s) that fit your head or project best and move on!
DurusWorks 1.2, born of QP, born of the venerable Quixote web framework, has been released. DW includes the Python object database Durus, a ZODB work-a-like with less complexity. Durus, like ZODB, has uses that go far beyond web development.
Dulcinea, a DW/QP centric package of objects, UI, and helpers, has also been updated.
QP, Durus, and DurusWorks have supported Python 2.x and Python 3.x from the same code base for years and can be found here:
For those who don't know who Don McRae is, he is the latest in a long line of BC Education Ministers. This BC Liberal government has a habit of chewing through Education ministers, which probably tells you they don't care much for the file.
You'd think Don's experience as a teacher would be useful in his ministerial role but since his appointment it has become quite clear that he can't, or isn't allowed to, think for himself. Instead all he spouts is party doctrine.
The video is a dramatization of an actual letter he wrote when he was a teacher, not a politician, to then education minister Shirley Bond, one of BC's worst-ever education ministers, although Christy Clark and Margaret MacDiarmid were both awful too.
In the video teacher-McRae laments over class size and composition. As minister-McRae he tells teachers and parents that these aren't issues at all.
Which version of McRae do you believe?
Christy Clark's B.C. Liberals may well be looking for the exit door soon.
Martyn Brown, former premier Gordon Campbell's chief of staff, doesn't think yesterday's budget holds good news for his party.
"I think the only ones jumping for joy today, really, will be the NDP because, effectively, this government has done the dirty work of saying it needs to increase corporate taxes, it needs to increase personal taxes on higher income earners, it's increasing MSP premiums."
Source: CBC, Liberal insider tears strip off B.C. budget
Thank you, B.C. government, for declaring a new stat holiday. We needed one in February and my family are fortunate enough that we all have a day off work and school and can be together with friends and family.
Bonus: Federal offices are open on B.C.'s family day, making it an ideal time to get your passport renewal taken care of. No lines, no waiting, same cheerful Service Canada folks!
No thank you Premier Christie Clark for granting your approval to the millions of dollars of pro-government pre-writ election-advertising-in-disguise. I don't watch a ton of television but every time I do I'm bombarded with your ads, paid for by our tax dollars.
Such obvious partisan spending should be made illegal.
The above shot was made within a few hundred meters of a terrific view of The Lions - colour me blind but I prefer the image of the birds and fence and prefer it in black and white.
Just a pair of pigeons. Fowl vs Lions:
View of The Lions down the lake at Cleveland Dam, North Vancouver
What do our preferences for images say about us? | http://mikewatkins.ca/feeds/atom | CC-MAIN-2013-48 | refinedweb | 1,412 | 71.44 |
I am trying to initialize an array in one of my header files from an external constant number, but I get the error: size of member 'arraysize' is not constant. Am I going about this wrong, or is there a way to do what I'm trying to do? I want to be able to control all array sizes with constants in the main module. The following below is the sample code.
/* main module*/
#include <iostream>
#include <stdlib.h>
#include "cat.h"
typedef unsigned short int ushort;
const ushort arraysize = 32;
int main()
{
cat boots;
strcpy(boots.name, "Boots");
cout >> boots.name;
return 0;
}
/* End of Main Module */
/* cat.h */
typedef unsigned short int ushort;
const extern ushort arraysize;
//extern const ushort arraysize; //Tried this both ways
class cat
{
ushort age;
char name[arraysize];
};
/* End of cat.h */ | https://cboard.cprogramming.com/cplusplus-programming/31210-const-extern.html | CC-MAIN-2017-09 | refinedweb | 138 | 75.3 |
We are very well aware with the importance of programming languages. They are basis for all sorts of Digital and technological advancements we have made.
Hence, since the dawn of the information age, programmers are looking for languages that can provide them variety of features. Guido von Rossum, the creator of the Python, was such programmer. He wanted to have a language that can provide all the features he wanted, for instance Scripting language, High level data type, etc.
If you would like to become a Python certified professional, then visit Mindmajix - A Global online training platform: “Python Training” Course. This course will help you to achieve excellence in this domain.
Exception handling in python with examples:
Python is a language that is good for projects that require quick developments. It is general purpose high level language and can be used for variety of programming problems.
Python just like other programming languages provides the method of detecting any errors. However, Python requires the exceptions (errors) to be handled at first instance before the successful execution of the program.
It is quite reasonable that practical applications of the programs may contain aspects that cannot be processed effectively by the programming language. Hence, error handling is an important and useful part of programming.
For brushing up our error handling skills, we will take a look at Exception handling. Here, in this article, we will discuss the major components of Python exception handling. To make the process easier we will first take a look at major aspects of exception handling in Python.
Related Article: Why Learn Python
What are exceptions in Python?
Exception in python allows the user to run a program if any problems occur using conditions in except command. Python works as per the instructions of the program. Whenever there are certain aspects which Python does not understand, it shows an exception, which means in the case of Python, the exceptions are errors.
Python ceases the action whenever it encounters any exception. It only completes the execution after the exceptions are handled. Here is an example of exception handling.
Syntax for try except else
Try: Print A. Except exception If A is exception, then execute this block. Else If there is no exception, then execute this block.
This syntax represents, to print A if no exception is present. Otherwise follow the instruction in exception statement. The program specifies certain conditions in case of encountering exceptions.
Instructions:
- More than one exceptions can be used in any program.
- Else-block can be used after the except clause, if the block in try: does not raises any exception.
- Else-block can be used for the code that does not require try: block’s protection.
- Generic clause can be provided to handle any exception.
Error:
>>> while True print('Hello world') File "", line 1 while True print('Hello world') ^ SyntaxError: invalid syntax
Here, invalid syntax represent a missing colon before print command, this program cannot be executed due to syntax error.
Why use Exceptions?
Exception can be helpful in case you know a code which can produce error in the program. You can put that code under exception and run an error free syntax to execute your program. This helps in preventing the error from any block which is capable of producing errors. Since exception can be fatal error, one should use exception syntax to avoid the hurdles.
Python itself contains a number of different exceptions. The reasons can vary as per the exception.
Here, we are listing some of the most common exceptions.
- ImportError: When any object to be imported into the program fails.
- Indexerror: The list being used in the code contains an out of range number.
- NameError: An unknown variable is used in the program. If the program do not contains any defined by the user and the used variable is also not pre defined in the Python, hence this error comes into play.
- SyntaxError: The code cannot be parsed properly. Hence it is necessary to take various precautionary measures while writing the code.
- TypeError: An inappropriate type of function has been used for the value.
- ValueError: A function has been used with correct type however the value for the function is irrelevant.
These errors are some of the most common ones and are required to be used, in case any sort of exceptions are required to be used.
Python also has assertion feature which can be used for raising exceptions. This feature tests an expression, Python tests it. If the result comes up false then it raises an exception. The assert statement is used for assertion. Assert feature improves the productivity of the exceptions in Python.
However for now, we will just focus upon raising exceptions with the use of raise statement.
Related Article: Install Python on Windows and Linux
Catching Exceptions in Python
Critical block of code which is susceptible to raise exception is placed inside the try clause and the code which handles exception is written with in except clause. If we want any particular or specific exception using- “try” clause would be helpful as we can mention several exceptions under one command.
Raising an exception is beneficial. It helps in obstructing the flow of program and returning the exception until it is handled. Raise statement can be used to raise an statement.
Example of a pseudo code:
try: # do something pass except ValueError: # handle ValueError exception pass except (TypeError, ZeroDivisionError): # handle multiple exceptions # TypeError and ZeroDivisionError pass except: # handle all other exceptions pass
Handling an exception (Syntax & example)
To handle an exception in python, try command is used. And there are certain instruction to use it,
- The try clause is executed including a statement between try and except.
- If no exception, the except clause is skipped and try statement is executed.
- If exception occurred, statement under except if matches the exception then clause is executed else try statement is executed
- If an exception occurs which does not match the exception named in the except clause, outer try statements are followed. If no handler is found, it is an unhandled exception and execution stops.
- The block of the code that can cause an exception can be avoided by placing that block in try: block. The block in the question can then be used under except exception.
>>> while True: ... try: ... x = int(input("Please enter a number: ")) ... break ... except ValueError: ... print("Oops! That was no valid number. Try again...") ...
Here is another example:
try: num1 = 7 num2 = 0 print (num1 / num2) print(“Done calculation”) except ZeroDivisonError: print(“Infinity”)
The code signifies another example of handling the exception. Here the exception is ZeroDivisionError.
Output:
Infinity
The except Clause with No Exceptions:
Though it is not considered like a good practice, but yet if it has to be done, the syntax will be
try:
print 'foo'+'qux'+ 7 except: print' There is error' You get the output There is error
This type of coding is not usually preferred by programmers because it is not considered as a good coding practice. The except clause is mainly important in case of exceptions. The program has to be given certain exceptions for the smooth execution of the code. It is the main function of the except clause.
The python program containing except clause with no exception is susceptible to errors which can break the flow of the execution of the program. It is a safe practice to recognize any block susceptible to exceptions and using except clause in the case.
Here is another example
try:
word = “python” print(word / 0) except: print(“An error occurred”)
As you can see in the above code, there are no exceptions whatsoever. Moreover the function described in the above code is incorrect. Hence, as per the instructions, we got the following output.
Output :
An error occured
The except Clause with Multiple Exceptions:
For multiple exceptions, Python interpreter finds a matching exception, then it will execute the code written under except clause.
Except(Exception1, Exception2,…ExceptionN)
For example:
import sys try: d = 8 d = d + '5' except(TypeError, SyntaxError)as e: print sys.exc_info()
We get output as shown
(, TypeError("unsupported operand type(s) for +: 'int' and 'str'",), )
The usage of multiple exceptions is a very handy process for the programmers. Since, the real world execution requires some aspects which can be in deviation with analytical aspect, the exceptions are handy to use.
Python being particularly used for the quick development projects, prevents exceptions while saving the errors at the same time.
Here is another example with multiple exceptions.
try:
variable = 8 print(variable + “hello”) print(variable / 4) except ZeroDivisonError: print(“Divided by zero”) except (ValueError,TypeError): print(“Error occurred”)
Code given above shows examples of multiple exceptions. The types of error in this program have been excepted(Except here means that the invalid operations that we want to execute are skipped and final message is printed that we want to get printed) and result can be seen below. Since the variable 10 cannot be added to the text ‘hello’ it is quite obvious to have the exceptions, in the case.
Output:
>>> Error occurred >>>
The result as expected is “Error occurred”.
The try-finally Clause:
There are certain guidelines using this command, either define an “except” or a “finally” clause with every try block. Also else and finally clauses cannot be used together. Finally clause is particularly useful when the programmer is aware with the outcome of the program.
Generally, it is seen that the else clause is helpful for the programmers with not much expertise in the programming. But when it comes to error correction and termination, this clause is particularly helpful.
For example:
try: foo = open ( 'test.txt', 'w' ) foo.write ( "It's a test file to verify try-finally in exception handling!!") print 'try block executed' finally: foo.close () print 'finally block executed'
OUTPUT:
try block executed finally block executed
Another example can be like
try: f = open("test.txt",encoding = 'utf-8') # perform file operations finally: f.close()
“Finally” is a clause to ensure any code to run without any consequences. Final statement is always to be used at the bottom of ‘try’ or ‘except’ statement. The code within a final statement always runs after the code has been executed, within try po even in ‘except’ blocks.
One more example will make it clearer.
try:
print(“Hello”) print(1/0) except ZeroDivisonError: print(“Divided by zero”) finally: print(“This is going to run anyway”)
As this program makes it clear that the statement “The program is going to run anyway” which is under the “finally” clause is always executed not depending upon the outcome of the code.
Output:
Hello Divided by Zero This is going to run anyway
This output shows exactly the same thing.
Argument of an Exception (With example)
Arguments provide a useful way of analyzing exceptions with the interactive environment of Python. If the programmer is having a number of arguments which are to be used in the program, then the programmer can help in more effective results with ‘except’ clause. Here:
def print_sum_twice(a,b): print(a + b) print(a + b) print_sum_twice(4,5)
In this program, the program has instructed the python print the sum twice. There are two conditions that have been specified in the code. Please note that the arguments are enclosed in the parenthesis.
Output:
>>> 9 9 >>>
Raising exceptions:
By raising a keyword forcefully we can raise an exception, its an alternative and manual method of raising an exception like exceptions are raised by Python itself.
We can also optionally pass in value to the exception to clarify why that exception was raised. Raising an exception or throwing it is a manual process and it quite helpful when the programmer is well aware with the potential exceptions.
Manual exceptions increase the affectivity in the execution of the program. The manual exceptions save Python from encountering any exceptions itself. Hence, smoothing the execution of the program.
If you want to understand the exception specifically, here is an example.
If you are a programmer and you are well aware with the program you are dealing with, has something that the Python might just not be able to understand. In this case you will like flag that block of code because you understand the significance of that code, so that you can exploit the Python’s capability of using it for quick project development and then deal with that block later.
>>>!
User-Defined Exceptions:
Exception classes only function as any regular class. However some simplicity is kept in classes to provide a few attributes improving the efficiency of the exception handling in Python. These attributes can then further be used for extraction by handlers in case of the exceptions.
During the creation of any module it is an optimal process to have a base class for exceptions which are defined particularly by that module. Subclasses can be used for creating error conditions in case of any specific exception.
Python itself offers a variety of exceptions that can be used during the programming by programmers. However, the python also offers also offers full freedom to create any manual exceptions.These exceptions increase the extent of productivity in Python.. Standard modules offer their own exceptions for defining any specific error for functions they define.
This is it what we have for ‘Basics of Exception Handling in Python’. If you have any suggestions that we can add in this article to make it more easier for beginners, please do share with us in the comment box.
Happy Coding!. | https://mindmajix.com/python/exception-handling | CC-MAIN-2021-04 | refinedweb | 2,253 | 54.83 |
Answered by:
unicode newline problem
- I have been using _snwprintf and notepad interpreted my files as ansii instead of unicode which they were supposed to be. I looked at the bits and saw that my \r\n were written as 0x000D(CR, ok) and 0x0D0A(ansii CR+LF!?) which made the mistake. I than tried to manualy insert Unicode LF into my file buffer instead doing so in _snwprint - same result. it seems that the compiler makes this.
As far as I know the corrent UNICODE line ending on windows platform is \r\n, that is 0x000D000A and ansii 0x0D0A
if this is not so, than there is a bug in notepad, worldpad and msvs08 text editor(all of these treat the file as ansii)
I'm compiling using msvs08 sp1
-Greg
Question
Answers
All replies
- int _valog(wchar_t* str, va_list* va){
if(!inited)
init();//init with default values
int size, ret;
wchar_t* tmp = new wchar_t[1024];
tmp[0]=L'\0';
ret = _vsnwprintf(tmp, 1022, str,(*va) );
if(ret==-1)
for(size=0; tmp[size]!=L'\0'&&size!=1024; size++);
else
size = ret;
tmp[size++] = L'\n';
ret = fwrite(tmp, sizeof(wchar_t), size, file);
fflush(file);
delete [] tmp;
return (ret==size)?size:-1;
}
also, I've changed the init to add the little-endian BOM at the begginging, same error. I've checked contents of L'\n' runtime and it's fine then.
the file: //direct link, hope it works, if not try next
-Greg
- The file does not seem to be opened in binary mode (with “b” flag instead of “t”). Check again, keep the BOM, and also execute this sequence: tmp[size++] = L'\r'; tmp[size++] = L'\n'; tmp[size++] = 0; at the end of each line.
- bool init(){
if(file)
fclose(file);
//try to open for reading
file = fopen("log.txt", "r");
if(file){
fclose(file);
file = fopen("log.txt", "ab");
}else{
char utf16[2] = {0xFF, 0xFE};
file = fopen("log.txt", "wb");
fwrite(utf16, 1, 2, file);
}
inited = true;
return true;
};
I dont deal with text files alot so I just went for a quick solution.
-Greg
Well, the quick solution is the wrong solution.
I dont deal with text files alot so I just went for a quick solution.
You are confusing two concepts. Many of the C Runtime calls come in pairs: fopen, _wfopen and fprintf, fwprintf, etc, depending on whether the parameters accept char* or a wchar_t* data types. This has nothing to do with the encoding of the file itself. For example, fwprintf() will print ANSI strings to a file that has been opened with ANSI encoding (the default), even though its parameters are specified as wide character strings. In your example, of fopen ("log.txt", "r") the encoding is ANSI.
For newly created files, you can specify the encoding. Wayne already pointed you in this direction. You can indicate which encoding you want by specifying either ccs=ANSI, ccs=UNICODE, ccs=UTF-8, or css=UTF_16LE in the fopen() call. Read more about it here. I find the documentation quite straightforward, but if you have questions, feel free to post them.
- Quote>if you have questions, feel free to post them.
OK Brian, I'll call you on that. ;-)
(1) The table in the help suggests that if UNICODE is used for a new file it will be written as ANSI.
I'm not seeing that result. The file is in fact being created as UTF-16LE whether it preexisted or not.
(2) You said:
Quote>fwprintf() will print ANSI strings to a file that has been opened with ANSI encoding
Quote>(the default), even though its parameters are specified as wide character strings.
In my experience, if you try to fwprintf a wchar_t string to a file opened as ANSI (no css
specified) you will get a runtime assert in debug mode or a fatal error in release mode.
- Wayne
P.S. - A small typo in your post: UTF_16LE should be UTF-16LE
- Follow-up: I can't seem to reproduce the symptoms I described in (2) above.
In my latest tests it does indeed seem to behave as you described.
Now I have to waste more grey cells trying to repro my earlier results,
which were definite and consistent previously.
- Wayne
(1) The table in the help suggests that if UNICODE is used for a new file it will be written as ANSI.
Wayne, I don't see that anywhere. Can you provide a quote I can search for?
P.S. - A small typo in your post: UTF_16LE should be UTF-16LEYup. Thanks.
- Brian -
One further question I meant to toss at you in my earlier post. You said:
Quote>You can indicate which encoding you want by specifying either ccs=ANSI, ...
I don't believe ccs=ANSI is acceptable. If that's what one wants, then they just omit the
ccs argument altogether. YMMV
- Wayne
Quote>I don't see that anywhere. Can you provide a quote I can search for?
At the link you provided, scroll down to the table "Encodings Used Based on Flag and BOM".
It says for Flag UNICODE and No BOM (or new file) encoding is ANSI.
- Wayne
My interpretation of that clause is that it applies to existing files being opened, as opposed to a new file. Indeed after reading the paragraph several times the documentation is confusing, if not outrightly wrong. However, when the flag is UNICODE and the file is new, a BOM is created and the encoding is UTF-16, which is what one would expect.
Follow-up to the Follow-up:
The debug assertion may have occurred under the following conditions:
file exists: no
fopen: ccs=UNICODE
fprintf (not fwprintf) of char (not wchar_t) string compiles clean but asserts in debug mode
- Wayne
This is very interesting, and I've confirmed this. The same behaviour is seen in VS2010 Beta 1. This doesn't appear to be documented anywhere. I'd venture to say that this is a bona fide bug. I can't see why there would be this restriction. Why not submit this to Microsoft Connect and see what they say, Wayne?
Nice catch!
I don't believe ccs=ANSI is acceptable. If that's what one wants, then they just omit theYes, you are correct.
ccs argument altogether. YMMV | https://social.msdn.microsoft.com/Forums/vstudio/en-US/fbfab936-3d00-4cb9-949b-dbff5010ad5d/unicode-newline-problem?forum=vclanguage | CC-MAIN-2016-40 | refinedweb | 1,053 | 72.87 |
On Fri, Apr 08, 2005 at 07:23:52PM +0200, Verdan wrote: > Hello > > > With the attached patch 'specter' can be compiled > > on amd64 using gcc-4.0. > > It's not so easy. It was done that way to allow specter compile on gcc > 2.95. This patch makes it impossible, so I hesitate to apply it. How is it relevant to the Debian package ? > Moreover fourth version of gcc is really experimental, and if I had to > choose specter compiling on 4 or 2.95 version I would choose 2.95. > For telling truth I don't know what to do now; upstream doesn't aprove > this patch and won't include it in 1.4pre2 version of specter. > I consider setting ,,wontfix'' tag. > What do You think about that ? Debian packages are not required to build with gcc 2.95. They are not required either to build with gcc 4.0 for sarge, but will be for etch. It is a better policy to be proactive than to wait until problems do happen. Moreover fixing this bug means fixing compilation with gcc-4.0, not applying the patch provided by the submitter. If the patch has issues you can reject it, and work with the submitter to write a better one. Furthermore, there is no technical reason why the patch cannot work with both 4.0 and 2.95. At worse a bit of cpp magic will fix that. In the case at hand you could add a macro ENTRY like #ifdef __GNUC__ #if __GNUC__ >= 4 #define ENTRY(base,name,dat) .base.name.dat = dat #else #define ENTRY(base,name,dat) .base.{name.dat = dat} #endif #endif and replace .value{.ptr = static_values.if_modified_since }} by ENTRY(value,ptr,static_values.if_modified_since) Cheers, -- Bill. <ballombe@debian.org> Imagine a large red swirl here. | https://lists.debian.org/debian-devel/2005/04/msg00413.html | CC-MAIN-2014-15 | refinedweb | 301 | 78.45 |
This blog has moved to please update your links!
Today we released the Windows Azure AppFabric CTP February release which introduces updates to the Caching service, and a new and improved Silverlight based portal experience.
This release builds on the prior Caching CTP October release, and introduces the following improvements:
In addition, we released a new Silverlight based portal which provides the same great experience as the Windows Azure and SQL Azure portals, and we have deprecated the old ASP.NET based portal.
Another enhancement we introduced in the new portal experience is to enable you to specifically choose which service namespaces get created. You can choose to create one or any of: Service Bus, Access Control or Caching.
To learn more about this CTP release read Wade Wegner's post on Windows Azure AppFabric CTP February, Caching Enhancements, and the New LABS Portal, and Vittorio Bertocci’s post on New Portal for the ACS Labs: Fewer Clicks, Happier Carpal Tunnels.
The updates are available here:, so be sure to login and check out the great new experience and capabilities.
The usage of the LABS CTP environment is free of charge, but keep in mind that there is no SLA.
Like always, we encourage you to check it out and let us know what you think through our Windows Azure AppFabric CTP Forum.
The Windows Azure AppFabric Team
Alas, the Silverlight client starts up with an exception. It doesn't finish loading the plugin manifest, just throws up an error a second or two after going to the portal and sits there, unusable. It's a Win 7 machine, with Silverlight 4 developer runtimes on it. Also happens on a ton of lab machines (with similar specs). I wish I could give more info, but the error message contains no details and is the same on every browser (IE9 beta, FF, Chrome etc).
Please use our Windows Azure AppFabric CTP Forum to find help and provide feedback: social.msdn.microsoft.com/.../threads.
When will this CTP go into production? Is there already a release date set?
Our CTPs are intended to give our customers a preview of the services and capabilities that we plan to release in upcoming releases, and gather feedback regarding these.
Both the new portal UI and the Caching service will be released to our production environment in the near future, but we have not disclosed the final dates yet. | http://blogs.msdn.com/b/windowsazureappfabric/archive/2011/02/08/windows-azure-appfabric-ctp-february-release-now-available.aspx | CC-MAIN-2014-15 | refinedweb | 403 | 59.94 |
Proland Documentation - Core Library
Introduction
Proland is a procedural landscape rendering library. It is designed to render very large landscapes in real time, up to whole planets. In this context it is not possible to store the whole landscape data in GPU memory. Instead this data must be produced on the fly for the current view. This can be done by loading precomputed data from disk, or by generating it with procedural methods. Another goal of Proland is the real-time editing of landscapes. This is realized by regenerating the landscape data on the fly, in the same way it is generated on the fly when the viewer moves.
Proland is made of a core library, extended by several plugins. The core library provides a producer framework, a terrain framework, and a basic user interface framework.
- The producer framework defines a common interface for all data producers, such as CPU or GPU producers, raster data or vector data producers, etc. Producers can use other producers to produce their data and can then be composed in complex ways (for instance a terrain normal producer can produce normals from the elevations produced by an elevation producer). The producer framework also provides a generic cache component to store produced data. This is used to take advantage of temporal coherence: thanks to this cache, data produced for one frame can be reused for the subsequent frames.

- The terrain framework uses a terrain quadtree that is dynamically subdivided, based on the current viewer position. It also provides a new GLSL uniform type that can be used to access a cache of raster data on the GPU, and some methods to update these caches (by using data producers) and to draw a terrain. It also provides deformations to map a flat terrain to other shapes, such as spheres (to render planets).

- The user interface framework is based on EventHandlers. It provides the basics for navigating through the large scenes that you can display in Proland.
The Proland plugins provide several predefined producers based on this framework: some producers are dedicated to the production of terrain elevation data, others are designed to produce generic raster data (this data can represent anything you want, such as reflectances, land cover classes, normal maps, horizon maps, ambient occlusion maps, etc), and others produce vector data (it is also possible to produce raster data from vector data, by rasterizing the vector data into textures).
Proland is based on the Ork library in several ways:
- the producer framework uses Ork tasks to produce data. Hence the landscape data can be produced in parallel, or even ahead of time with the prefetching feature of the Ork scheduling framework. This framework also provides the dependencies between producer tasks: when data produced by a producer is edited, all the data derived from it, directly or indirectly via other producers, is automatically recomputed.

- the terrain framework uses the Ork rendering framework for shaders, meshes, etc., and extends the Ork scene graph with new methods to update GPU caches and to draw terrains.

- finally, Proland extends the Ork resource framework with new resource types for the predefined producers and for the terrain components.
The following sections present the producer framework, the terrain framework and the user interface framework:
- Producer framework
- Terrain framework
- User Interface
Producer framework
The producer framework defines how the landscape data is produced, stored and cached. Using this framework it is possible to define several producers, each producer producing a part of the landscape data. For instance there can be a producer for the terrain elevation, another for the terrain normals, a producer for the terrain reflectance, another for the river data, a producer for building models, etc. Each producer can use as input procedural parameters, data stored on disk, data produced by another producer, a combination of these, etc.
The producer framework assumes that the data produced by a producer is divided in a quadtree. This means that each tile, i.e., the data associated with a quad, can be produced independently. A tile can contain raster data, vector data, or any other data. Each producer can organize its data using its own quadtree, i.e., the quadtrees of the various producers need not have the same characteristics (maximum depth, tile size, etc). However the quads and tiles are always identified using the same "coordinate system", whatever their quadtree. In fact a quad or tile is identified by its level in the quadtree (0 is the root), and by its tx,ty coordinates at this level (tx and ty varying between 0 and 2^level - 1, with (0,0) being the lower left corner). These (level,tx,ty) coordinates are called logical coordinates:
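A minimal C++ sketch can make this addressing scheme concrete. The type and member names below are illustrative, not the actual Proland API: a tile identifier holds (level, tx, ty), and the parent and child quads follow directly from these coordinates.

```cpp
#include <cassert>

// Hypothetical tile identifier; Proland's real classes differ.
struct TileId {
    int level;  // 0 is the root
    int tx, ty; // in [0, 2^level - 1], (0,0) = lower left corner

    // The parent quad, at level - 1 (undefined for the root quad).
    TileId parent() const {
        return TileId{level - 1, tx / 2, ty / 2};
    }

    // The child quad in sub-column i and sub-row j (i, j in {0, 1}).
    TileId child(int i, int j) const {
        return TileId{level + 1, 2 * tx + i, 2 * ty + j};
    }
};
```

For example, quad (2,3,1) has parent (1,1,0), and its upper left child is (3,6,3).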
The root tile (0,0,0) of a producer contains the data corresponding to the whole landscape, at a coarse resolution. The tiles at the other levels contain only a part of the data, but at higher resolution (the higher the level, the higher the resolution). The producer framework also uses physical coordinates. These coordinates are the ox,oy coordinates of the lower left corner of a quad in a fixed reference frame, whose origin is at the center of the root quad, plus the size of the quad l in some length unit (e.g., meters). The figure below illustrates this, assuming that the size of the root quad is L:
These physical coordinates are only local coordinates, like the local reference frame of each scene node in an Ork scene graph. At rendering time the landscape can be placed anywhere in the world frame with appropriate translations, rotations and other transformations.
- Note:
- there is a clear distinction between a quad and a tile. A quad is a node in a quadtree, a tile is some data associated with a quad. Logical coordinates apply to both quads and tiles, but physical coordinates are only associated with quads. In fact a tile can contain data outside the physical boundary of its associated quad. In this case we say that the tile has a non empty border. Tiles with borders introduce some redundancy in the produced data, but this redundancy is sometimes useful to avoid artifacts with texture filtering, to avoid producing neighboring tiles, etc. The figure below illustrates this difference with tiles containing raster data.
Tiles are stored in tile storages. There are tile storages for raster data tiles on GPU (using textures), tile storages for raster data on CPU (using arrays), and tile storages for vector data or other CPU data (using ork::Object). It is also possible to define tile storages on GPU using GPU buffers, for instance vertex buffers (for instance the PlantsProducer produces one point mesh for each quad, with all these meshes stored in a single GPU buffer - see the source code of proland::PlantsProducer).
A tile storage can contain tiles produced by several producers. In other words several producers can use the same storage to store their tiles. A tile storage can contain tiles that are necessary for the current view, but it can also contain tiles that were created for a previous frame but are no longer used. If the viewer goes back to the previous viewpoint, then these tiles will be reused directly: they will not need to be produced again. Similarly a tile storage can contain tiles that are prefetched, i.e., that are produced ahead of time for future frames.
The knowledge of which tiles are in use, i.e., necessary for the current view, and which are not (cached from a previous frame, or prefetched for a future frame), is managed by a tile cache. A tile cache also stores a mapping between the logical coordinates of tiles, and their storage coordinates, i.e., their location in the tile storage. Like tile storages, tile caches can be shared between tile producers. The figure below illustrates the relation between a tile cache and a tile storage, using a GPU tile storage (the storage coordinates format and meaning depend on the kind of storage used. For a GPUTileStorage using textures, it is a layer index - the storage texture is a 2DArrayTexture, with one tile per layer).
In this example, the tile cache indicates that the tiles (2,1,2) and (2,1,3) are in use, and are stored in the tile storage, in the layers 3 and 1. It also indicates that 3 other tiles are available in the tile storage but are currently unused, i.e., not necessary for the current view. The tile storage stores the tiles in a 2D texture array with 7 layers. Each layer is a 8x8 2D texture, also called a slot. Currently only 5 slots are allocated, the remaining slots are free to store other tiles. Note that an allocated slot (at the storage level) can correspond to an unused tile or to a tile in use (at the cache level).
Tile storage
A tile storage is represented with the proland::TileStorage class. This abstract class has 3 sub classes proland::GPUTileStorage, proland::CPUTileStorage and proland::ObjectTileStorage, for GPU raster data, CPU raster data, and CPU vector or other data (respectively - you can also implement your own subclass, see for instance the source code of proland::PlantsProducer, which defines a GPU tile storage based on a vertex buffer object). Each tile storage has a capacity which is the number of slots in this storage, each slot being able to store one tile. The capacity of a storage is fixed and cannot be changed at runtime. Each slot can either be free or allocated. A free slot does not contain any tile, an allocated slot contain a single tile (either in use or not).
The capacity can be retrieved with proland::TileStorage::getCapacity. The number of free slots is given by proland::TileStorage::getFreeSlots. A free slot can be obtained with proland::TileStorage::newSlot. The returned slot is then considered allocated, and can be used to store a tile. Conversely an allocated slot can be returned to the pool of free slots with proland::TileStorage::deleteSlot.
GPUTileStorage
The proland::GPUTileStorage is a tile storage to store raster data tiles on GPU. It uses 2D textures or 2D array textures to store the tiles. Such a tile storage can be created with the Ork resource framework, using the following format:
<?xml version="1.0" ?>
<gpuTileStorage name="myGpuStorage" tileSize="196" nTiles="512"
    internalformat="RGBA8" format="RGBA" type="UNSIGNED_BYTE"
    min="LINEAR_MIPMAP_LINEAR" mag="LINEAR" minLOD="0" maxLOD="1"
    tileMap="false"/>
In this example each slot is created to store tiles made of 196x196 pixels (this size must include the tile borders, if any). The total number of slots is 512. In other words this storage allocates a 196x196x512 2D array texture. This texture uses the RGBA8 internal format (this gives a total of 71.2 MB). The texture filters and min and max LOD are specified like for texture resources. The tileMap attribute is explained in the terrain framework section.
The slots managed by a GPU tile storage are described with the proland::GPUTileStorage::GPUSlot class. This class describes the location of the slot in the storage textures. It also provides methods to copy a part of the framebuffer or of a texture into this slot.
- Note:
- If you use a mipmap filter, then each time the content of a slot is changed you must call proland::GPUTileStorage::notifyChange (this is used to automatically update the mipmap levels of the storage textures when changes have occurred). In fact you don't have to do this yourself, unless you write your own producer.
CPUTileStorage
The proland::CPUTileStorage is a tile storage to store raster data tiles on CPU. It uses arrays to store the tiles (each tile is stored in its own array). Such a tile storage can be created with the Ork resource framework, using the following format:
<?xml version="1.0" ?>
<cpuByteTileStorage name="myCpuStorage" tileSize="196" channels="4" capacity="1024"/>
In this example the storage can store 1024 tiles made of 196x196 pixels, with 4 bytes per pixel. Using cpuFloatTileStorage instead, the pixels would be made of 4 floats per pixel.
The slots managed by a CPU tile storage are described with the proland::CPUTileStorage::CPUSlot class. This class gives access to the array containing the tile raster data.
ObjectTileStorage
The proland::ObjectTileStorage is a tile storage to store arbitrary data tiles on CPU. Each slot stores a pointer to an ork::Object. Unlike the GPU and CPU tile storages, here the data for each slot is not allocated by the tile storage (since it can be arbitrary). Instead, you must allocate this data manually each time you get a new slot, and you must delete it manually each time you delete a slot. Such a tile storage can be created with the Ork resource framework, using the following format:
<?xml version="1.0" ?>
<objectTileStorage name="myObjectStorage" capacity="1024"/>
The slots managed by an object tile storage are described with the proland::ObjectTileStorage::ObjectSlot class. This class gives access to the tile data via a pointer.
Tile cache
A tile cache is represented with the proland::TileCache class. Since a tile cache does not store tiles itself (this is done by tile storages), but only stores and manages a mapping between logical tile coordinates and slots in a tile storage, a single class can be used for all kinds of tiles.
A tile cache has an associated tile storage. It manages a mapping between logical tile coordinates and slots in its associated storage. A tile cache is used by one or more tile producers. Each time a tile producer is created and associated with a tile cache, it gets a local producer id from the tile cache, and the tile cache maintains a reference to this producer. This reference is used when a new tile from this producer is requested from the cache, in order to produce it.
The tiles managed by a tile cache can be in use or not (tiles in use generally correspond to those that are necessary to render the current landscape view). More precisely a tile cache keeps track of the number of users of each tile. Users acquire tiles with the proland::TileCache::getTile method, and release them with proland::TileCache::putTile. Hence the first method increments the counter of users of the requested tile, and the second method decrements this counter. When this counter becomes 0 the tile becomes unused.
A tile in use is "locked", i.e., its slot in the tile storage cannot be reused to store another tile, as long as the tile is in use. On the contrary, an unused tile can be evicted from the cache at any moment, and its slot can be reused to store another tile. This happens, in particular, when a new used tile is needed, but all slots in the storage are allocated. Then it is necessary to evict an unused tile from the cache, in order to reuse its slot to store the new used tile. An evicted tile will need to be produced again if it is needed again in the future. In order to minimize the number of times a tile is regenerated, a tile cache evicts in priority the tiles that have not been used for a long time (this heuristic is called the Least Recently Used - or LRU - cache heuristic).
In addition to getTile and putTile, the main methods of a tile cache are the following:
- the proland::TileCache::findTile method can be used to find a tile in the cache. This method does not change the number of users of the returned tile. The tile can be looked for in the list of tiles that are in use, or in all the tiles managed by the tile cache, whether they are used or not.
- the proland::TileCache::prefetchTile method is used to request the production of a tile for the future frames. The method returns immediately. The tile will be created as an unused tile. If there is no free slot to store this prefetched tile, an unused tile will be evicted first.
- the proland::TileCache::invalidateTiles method is used to force the regeneration of the tiles produced by a given producer. All the tiles keep their current slot in the tile storage, but the slot content will be recomputed before use (this means that tiles in use will be recomputed immediately, while unused tiles will not be recomputed before they become in use again).
When a tile is requested with getTile two cases can happen:
- if the tile is in cache its number of users is incremented by one. If the tile was unused it becomes in use.
- otherwise the tile is produced, in a free storage slot or in the slot of a previously evicted (and unused) tile. In fact the tile is not produced immediately. Instead a ork::Task to produce the tile is returned (inside a proland::TileCache::Tile). This task needs to be executed by a ork::Scheduler before the tile data can be used.
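The getTile / putTile semantics and the LRU eviction policy can be sketched as follows (a simplified illustration, not the actual proland::TileCache implementation; tile production is reduced to inserting an entry, and the storage capacity is a plain slot count):

```cpp
#include <cassert>
#include <list>
#include <map>
#include <tuple>

// (level, tx, ty) logical coordinates
typedef std::tuple<int, int, int> TileId;

// Minimal sketch of a tile cache: tiles in use are locked, unused tiles stay
// cached and are evicted in least recently used order when a slot is needed.
class LruTileCache {
public:
    explicit LruTileCache(int capacity) : capacity_(capacity) {}

    // acquires a tile: increments its user counter, "producing" it if needed;
    // returns false if all slots are locked by tiles in use
    bool getTile(TileId id) {
        auto it = users_.find(id);
        if (it != users_.end()) {
            if (it->second == 0) unused_.remove(id); // tile becomes in use again
            ++it->second;
            return true;
        }
        if (int(users_.size()) == capacity_) {
            if (unused_.empty()) return false; // no evictable tile
            users_.erase(unused_.front());     // evict the least recently used tile
            unused_.pop_front();
        }
        users_[id] = 1; // "produce" the tile in the freed slot
        return true;
    }

    // releases a tile; when its counter reaches 0 it becomes unused, but it
    // stays cached until its slot is needed for another tile
    void putTile(TileId id) {
        if (--users_[id] == 0) unused_.push_back(id);
    }

    bool inCache(TileId id) const { return users_.count(id) != 0; }

private:
    int capacity_;
    std::map<TileId, int> users_; // tile -> number of users
    std::list<TileId> unused_;    // unused tiles, least recently used first
};
```

With a capacity of 2, acquiring and releasing tile (0,0,0) and then acquiring two other tiles evicts (0,0,0); with a capacity of 1, a second getTile fails while the first tile is still locked.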
TileCache resource
A tile cache can be loaded with the Ork resource framework, using the following format (the nested storage resource can of course be a cpuXxxTileStorage or an objectTileStorage; it describes the tile storage associated with the tile cache):
<?xml version="1.0" ?>
<tileCache name="myCache" scheduler="myScheduler">
    <gpuTileStorage .../>
</tileCache>
Tile producer
A tile producer is represented with the proland::TileProducer class. This abstract class has many concrete sub classes, presented in the sec-producers section. A tile producer has an associated tile cache, used to cache the tiles it produces (it is given by proland::TileProducer::getCache). A tile producer is associated with a single cache, but a cache can be used by several producers (if they produce tiles of the same type). In order to distinguish tiles produced by different producers in a tile cache, each producer has a unique local identifier in this cache, automatically assigned when the producer is created. It is given by proland::TileProducer::getId.
The main method of a tile producer is the proland::TileProducer::doCreateTile abstract method. It implements the tile production algorithm, i.e., it defines how the tiles are produced. However this method is never called directly, but through the proland::TileProducer::getTile method. If a requested tile is not in the cache, getTile does not produce this tile immediately. Instead a ork::Task to produce this tile is returned.
A "basic" tile producer returns a "basic" ork::Task to produce a tile, i.e. a task without dependencies on other tasks. On the contrary, a tile producer that uses as input data tiles produced by another producer returns a ork::TaskGraph to produce a tile. This task graph contains the task to produce the tile, with dependencies to the task(s) that produce the input data. Consider for example a normal producer, producing terrain normals from terrain elevations produced by an elevation producer. We then have the following producers, caches and storages:
Then the task graphs returned by the getTile method of the normal producer look like this:
The task N(2,1,2) to produce the normal tile (2,1,2) has a dependency towards the task E(2,1,2) that produces the elevations for the same quad, and both are put in a task graph.
In practice the elevation producer is not a "basic" producer: it uses as input data tiles produced by a residual producer (see sec-producers). It also uses itself recursively: in fact an elevation tile is produced from its parent elevation tile, by upsampling its data and adding to it residual elevations. An elevation producer may also use vector data produced by a graph producer (via layers - see below), which also uses itself recursively. So in fact we have the following relations between the normal, elevation, residual and graph producers (here we do not show the associated caches and storages):
The task N(2,1,2) to produce the normal tile (2,1,2) is then a more complex task graph (in reality the graph is even more complex, because a normal producer also uses itself recursively, like the elevation and graph producers):
At the highest level we still have a task graph with two nested tasks N(2,1,2) and E(2,1,2), with a dependency between them, as in the previous example. Here however the E(2,1,2) task is itself a task graph. Note the "fractal" aspect of the whole graph: this comes from the recursive use of the elevation producer by itself, and of the graph producer by itself. Note also that this double recursion leads to tasks that appear in several task graphs, i.e., that are shared between task graphs (there is only one instance of each shared task).
Successive calls to the getTile method for the same tile, always return the same ork::Task instance. Since a task instance is executed only once (see Task graph), the complex task graph above, once scheduled, will generally not lead to the execution of all the tasks it contains. Indeed most tasks will probably have already been executed, and so will not be reexecuted. However, if any task in this graph is explicitly rescheduled (this happens if the corresponding tile is invalidated with proland::TileProducer::invalidateTiles), the Ork framework will automatically reschedule the tasks that depend directly or indirectly on the rescheduled task. Hence all the tiles depending directly or indirectly on the invalidated tile will be automatically recomputed.
Main methods
The proland::TileProducer class provides some generic methods that provide information about the tiles it can produce:
- the proland::TileProducer::isGpuProducer method indicates whether this producer produces raster data tiles on GPU or not. A GPU producer is supposed to use a tile cache associated with a proland::GPUTileStorage.
- the proland::TileProducer::getRootQuadSize method gives the physical size of the root quad of the quadtree managed by this producer. This size can be set with proland::TileProducer::setRootQuadSize.
- the proland::TileProducer::getBorder method indicates if the tiles produced by this producer have a border. More precisely it indicates the size of these borders, in pixels (assuming that tiles contain raster data). The default implementation of this method returns 0 (i.e., no border), but you can override it.
- the proland::TileProducer::hasTile method indicates if this producer can produce a tile, specified by its logical coordinates. By default this method always returns true, but you can override it. For instance, it is common for producers to be limited to a maximum resolution, i.e., to a maximum level in the quadtree.
- the proland::TileProducer::hasChildren method indicates if this producer can produce the sub tiles of a tile, specified by its logical coordinates. The producer framework assumes that if a sub tile of a tile can be produced, then the four sub tiles of this tile can be produced. Hence this method returns true if the lower left sub tile of the specified tile can be produced (as returned by hasTile).
There is also a proland::TileProducer::update method, which is called once per frame (via proland::UpdateTileSamplersTask and proland::TileSampler::update). This method does nothing by default, but you can override it to invalidate the tiles if necessary (i.e., if some input data used to produce the tiles has changed - if this input data is produced by another producer then you don't need to do this, it will be done automatically via the Ork tasks framework). Alternatively, if the produced data must be animated, you can also modify, via this method, the content of already produced tiles at each frame.
Finally the proland::TileProducer class also provides some convenient methods that simply call the corresponding methods on its associated tile cache. These methods are:
- proland::TileProducer::getTile
- proland::TileProducer::putTile
- proland::TileProducer::findTile
- proland::TileProducer::prefetchTile
- proland::TileProducer::invalidateTiles
Tile producer layers
Some tile producers can be customized with layers. A layer can modify the data produced by the "raw" producer or by the previous layers. For instance you can imagine a layer to draw roads from vector data on top of a satellite photo (the "raw" data), or a layer to modify the terrain elevations, based on the same road vector data, to generate the footprint of roads in the terrain. Like a producer, a layer can use as input data produced by other producers (in the previous example, the layers use vector data produced by a graph producer). The layers of a tile producer are managed with the proland::TileProducer::getLayerCount, proland::TileProducer::getLayer and proland::TileProducer::addLayer methods.
User defined producers
You can define your own tile producers by extending the proland::TileProducer class. This section shows how you can do this, using the example of a GPU producer using as input tiles produced by a CPU producer.
class MyProducer : public TileProducer
{
public:
    MyProducer(Ptr<TileCache> cache, Ptr<TileProducer> input, ...) :
        TileProducer("MyProducer", "MyCreateTile")
    {
        init(cache, input, ...);
    }

    virtual ~MyProducer()
    {
    }

protected:
    MyProducer() : TileProducer("MyProducer", "MyCreateTile")
    {
    }

    void init(Ptr<TileCache> cache, Ptr<TileProducer> input, ...)
    {
        TileProducer::init(cache, true);
        this->input = input;
        ...
    }
The above code contains the initialization code to create our producer, using the pattern to easily define an Ork resource subclass of this class (see User defined resources). The constructor takes as argument a tile cache, that will be used to cache the produced tiles. This argument is required by the constructor of the super class. The constructor also takes as argument the producer whose tiles will be used as input. We assume it is a CPU producer.
    virtual Ptr<Task> startCreateTile(int level, int tx, int ty, unsigned int deadline,
        Ptr<Task> task, Ptr<TaskGraph> owner)
    {
        Ptr<TaskGraph> result = owner == NULL ? new TaskGraph(task) : owner;
        TileCache::Tile *t = input->getTile(level, tx, ty, deadline);
        result->addTask(t->task);
        result->addDependency(task, t->task);
        return result;
    }
The startCreateTile method overrides the corresponding method of the super class. Its role is to construct the task or graph of tasks to produce a given tile. Here our producer uses as input a tile produced by another producer, so the "basic" task to produce our tile - automatically constructed and passed as argument in task - must have a dependency on this input task (so that it is executed before we start producing our tile). This is why this method puts task and the task t that produces the input tile in a task graph, and creates a dependency between them. Note that t is obtained with getTile: this locks the input tile until we call putTile, i.e., the input data will not be evicted from the cache unexpectedly.
    virtual bool doCreateTile(int level, int tx, int ty, TileStorage::Slot *data)
    {
        CPUTileStorage<unsigned char>::CPUSlot *in;
        GPUTileStorage::GPUSlot *out;
        TileCache::Tile *t = input->findTile(level, tx, ty);
        in = dynamic_cast<CPUTileStorage<unsigned char>::CPUSlot*>(t->getData());
        out = dynamic_cast<GPUTileStorage::GPUSlot*>(data);
        ...
        getCache()->getStorage().cast<GPUTileStorage>()->notifyChange(out);
    }
The doCreateTile method overrides the corresponding method of the super class. Its role is to generate the requested tile. Here this method first gets the input tile needed for this production. Note that it does so with findTile: we are sure the tile is in cache because it cannot be evicted until we call putTile (see above). It then gets the slot in which the tile must be produced. This slot is passed as argument in data but must be cast to the right type. Once the tile is produced (by the "dots"), the producer notifies its tile storage that the slot in which the tile has been produced has changed, so that the mipmap levels of the storage textures can be automatically updated when needed (see GPUTileStorage).
    virtual void stopCreateTile(int level, int tx, int ty)
    {
        TileCache::Tile *t = input->findTile(level, tx, ty);
        input->putTile(t);
    }

private:
    Ptr<TileProducer> input;
};
The stopCreateTile method overrides the corresponding method of the super class. Its role is to clean up the "resources" used during the tile production. Here this method calls putTile on the tile used as input, since the content of this tile is no longer necessary. The effect of this call is to unlock this input tile (if it was not locked by other users), which can then be evicted from its cache at any moment.
User defined tile producer layers
You can also define your own tile producer layers by extending the proland::TileLayer class. This task is very similar to the definition of a tile producer. In particular a tile producer layer has the same startCreateTile, doCreateTile and stopCreateTile methods, which have the same role and can be overridden in the same way.
Terrain framework
The terrain rendering framework manages one or more terrains, each terrain being associated with a set of tile producers. For each terrain, the terrain quadtree is dynamically subdivided, based on the current viewer position. When new quads are subdivided, the producers associated with the terrain are used to produce the corresponding tiles. The terrain framework also provides new GLSL uniforms that allow shaders to access the slots of a texture cache like a normal texture. Hence you can access the produced tiles like normal textures in your shaders. Finally the framework provides deformations to map a flat terrain to other shapes, such as spheres (to render planets).
- Note:
- here we speak about "terrains" but in fact the framework is not limited to the terrain itself. Indeed the tile producers associated with a terrain can produce any kind of data, including data to render 3D vegetation or buildings on top of the terrain (see the "trees1" example). Hence the "terrain" framework can be used to render full landscapes.
Terrain deformation
The terrain framework supports terrain deformations. A deformation here is not a local terrain modification. Instead, it is a global deformation of space, which can for instance transform a plane into a sphere or a cylinder. Terrain deformations are used to generate spherical planets, cylindrical terrains (e.g., for a cylindrical space ship whose rotation simulates gravity), etc.
A deformation transforms a point in a local space into a point in a deformed space:
In practice the local space is the space in which the quad physical coordinates are defined - see Producer framework. In the local space the "sea level" surface is the plane z=0, z being the vertical axis. This plane can be deformed into a sphere, a cylinder, etc. Note however that a deformation transforms the whole 3D space, not a single 2D surface (this is needed to transform points above the sea level).
The proland::Deformation class represents a terrain deformation. It defines the methods that a terrain deformation must provide, and implements them for the case of the identity deformation (i.e., no deformation). The proland::SphericalDeformation is a sub class of this class that deforms horizontal planes into spheres. Finally the proland::CylindricalDeformation is a sub class of this class that deforms horizontal planes into cylinders. Note that you can define your own sub classes if needed.
The actual deformation is implemented by the following methods:
- proland::Deformation::localToDeformed: transforms a point in the local space into the deformed space.
- proland::Deformation::deformedToLocal: transforms a point in the deformed space into the local space.
- proland::Deformation::localToDeformedDifferential: computes the differential of the deformation function at some local point. This differential gives a linear approximation of the deformation around a point: if p is near localPt, then the deformed point corresponding to p can be approximated with localToDeformedDifferential(localPt) * (p - localPt).
- proland::Deformation::deformedToTangentFrame: computes an orthonormal reference frame of the tangent space at a deformed point. This reference frame is such that its xy plane is the tangent plane, at deformedPt, to the deformed surface corresponding to the local plane z=cste. This orthonormal reference frame does not give the differential of the inverse deformation function, which in general is not an orthonormal transformation. This tangent frame defines the tangent space in which terrain normals are computed.
The proland::Deformation class is also responsible to set the GLSL shader uniforms that are necessary to transform the terrain vertices on GPU. This is done with the proland::Deformation::setUniforms methods. There are two such methods: the first one can set uniforms that do not depend on a quad (such as a sphere radius for a spherical deformation), while the other can set uniforms that are specific to a quad. The GLSL uniforms that are set by these methods depend on the actual transformation.
Spherical deformation
The proland::SphericalDeformation deforms horizontal planes into spheres. It is intended to render planets of radius R (at sea level), using 6 terrains placed on the faces of a cube of size 2R x 2R x 2R, each terrain being deformed into a portion of the sphere:
Mathematically, the deformation is defined as follows: from a point p=(x,y,z) in local space, we first construct the point P=(x,y,R) on the "top" (i.e., "north") cube face (in green in the above figure), in the planet frame (a reference frame whose origin is the planet center). This point is then used to define the deformed point, in the planet frame, as q=(R+z) P / ∥ P ∥:
This deformation maps the plane z=0 into a half-sphere. Hence at least two terrains are needed to cover the whole sphere. In order to limit deformations, it is better to use 6 terrains on the face of a cube, as shown above. The inverse deformation maps the whole sphere, except the south pole "face", to a developed cube in a plane (like for a cube map - see above figure). For the "north" face, the inverse deformation is:
The tangent frame at some deformed point q, in which terrain normals are computed, is defined by the following unit vectors in planet frame:
GLSL uniforms
In theory the transformation q = (R+z) P / ∥ P ∥ can be easily implemented on GPU. There are however two precision problems, even with 32 bits floats. They are linked to the fact that transformed points are computed in a reference frame whose origin is at the planet center. Hence for a planet like the Earth, the coordinates of transformed points are large (of the order of R=6360000m) and do not have enough bits left to represent the altitude precisely. The other problem arises when these coordinates are transformed into the reference frame of the camera, whose origin must also be expressed in the planet frame (subtracting two large numbers close to each other leads to imprecise results).
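The first problem can be demonstrated with a few lines of code (the 1.234 m altitude is an arbitrary example value; near R = 6360000 m the spacing between consecutive 32 bit floats is 0.5 m):

```cpp
#include <cassert>
#include <cmath>

// Adds an altitude to a planet-scale coordinate and subtracts the coordinate
// back, all in 32 bit floats: the altitude is rounded to a multiple of 0.5 m.
float addAltitudeSinglePrecision(float radius, float altitude)
{
    return (radius + altitude) - radius;
}

// The same computation in 64 bit floats keeps the altitude almost exactly,
// which is why the CPU side of the method below works in double precision.
double addAltitudeDoublePrecision(double radius, double altitude)
{
    return (radius + altitude) - radius;
}
```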
In order to solve these two problems, the idea is to compute the deformed quad corners and to transform them in the camera frame on CPU, using double precision. The results are points with "small" coordinates, that can easily be interpolated on GPU without precision problems. More precisely we compute on CPU the deformed quad corners ci (i=1,...4) and the vertical vectors ni at these corners, in the camera frame, and we use them on GPU. The idea is to compute a deformed vertex as an interpolation of the deformed corners, displaced along an interpolation of the vertical vectors:
We note pi the corners of a quad in the local space, and ci the corresponding deformed points (ci = R Pi / ∥ Pi ∥, where Pi = (pi.x, pi.y, R) in the planet frame). We also note ni the deformed vertical vectors (ni = Pi / ∥ Pi ∥ in the planet frame). We want to compute the deformed point corresponding to a local point p defined as p= ∑ αi pi + (0,0,h) (with ∑ αi = 1). And we want to express this deformed point q as
Finally we want that ∑ α'i = 1 so that the above formula holds in any reference frame. The unknowns α' i and h' can be computed by writing the above relation in the planet frame, and by comparing it with the definition of q in this frame, q = (R+h) P / ∥ P ∥:
We can see that with α'i = k αi ∥ Pi ∥ / ∥ ∑ αi Pi ∥ the first two lines become k.(R + h') = (R + h). We can then compute h' from k, and use the third equation to compute k. We get:
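The derivation above can be checked numerically (an illustration; the corner positions and interpolation weights used in the test are arbitrary example values):

```cpp
#include <cassert>
#include <cmath>

static const double R = 6360000.0; // planet radius (example value)

static double norm3(double x, double y, double z)
{
    return std::sqrt(x * x + y * y + z * z);
}

// Given the local corners pi of a quad (cx[i], cy[i] are pi.x, pi.y), the
// barycentric weights alpha_i (summing to 1) and an altitude h, computes the
// distance between the interpolated point sum alpha'i (ci + h' ni) and the
// exact deformed point q = (R + h) P / ||P||. It should be nearly zero.
double interpolationError(double cx[4], double cy[4], double a[4], double h)
{
    double L[4], sumAL = 0.0, Px = 0.0, Py = 0.0;
    for (int i = 0; i < 4; ++i) {
        L[i] = norm3(cx[i], cy[i], R); // ||Pi||, with Pi = (pi.x, pi.y, R)
        sumAL += a[i] * L[i];
        Px += a[i] * cx[i];
        Py += a[i] * cy[i];
    }
    double Pn = norm3(Px, Py, R);        // ||P||, with P = sum alpha_i Pi
    double k = Pn / sumAL;
    double hp = (h + R * (1.0 - k)) / k; // h'
    double qx = 0.0, qy = 0.0, qz = 0.0;
    for (int i = 0; i < 4; ++i) {
        double ap = a[i] * L[i] / sumAL; // alpha'i, with sum alpha'i = 1
        double inv = 1.0 / L[i];         // ci + h' ni = (R + h') Pi / ||Pi||
        qx += ap * (R + hp) * cx[i] * inv;
        qy += ap * (R + hp) * cy[i] * inv;
        qz += ap * (R + hp) * R * inv;
    }
    double s = (R + h) / Pn;             // exact deformed point (R + h) P / ||P||
    return norm3(qx - s * Px, qy - s * Py, qz - s * R);
}
```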
We compute on CPU with double precision the deformed corners and verticals ci and ni, expressed directly in screen space (i.e., after transformation in the camera frame, and after the perspective projection). We also compute on CPU the norms ∥ Pi ∥. The proland::Deformation::setUniforms method passes these values in the screenQuadCorners and screenQuadVerticals mat4 uniforms, and in the screenQuadCornerNorms vec4 uniform. The shader can then compute the screen space coordinates of the deformed vertices of the quad mesh with the following code (see the "terrain2" example for a concrete usage of this code - we assume that the "zfc" variable contains the elevation values zf,zc,zm for the current vertex):
float R = deformation.radius;
mat4 C = deformation.screenQuadCorners;
mat4 N = deformation.screenQuadVerticals;
vec4 L = deformation.screenQuadCornerNorms;
vec3 P = vec3(vertex.xy * deformation.offset.z + deformation.offset.xy, R);
vec4 uvUV = vec4(vertex.xy, vec2(1.0) - vertex.xy);
vec4 alpha = uvUV.zxzx * uvUV.wwyy;
vec4 alphaPrime = alpha * L / dot(alpha, L);
float h = zfc.z * (1.0 - blend) + zfc.y * blend;
float k = min(length(P) / dot(alpha, L) * 1.0000003, 1.0);
float hPrime = (h + R * (1.0 - k)) / k;
gl_Position = (C + hPrime * N) * alphaPrime;
This code first computes P= ∑ αi Pi in P. It then computes the αi in alpha, based on the xy vertex coordinates (supposed to vary between 0 and 1 in the quad). It next computes the α'i in alphaPrime, computes h and k, computes h' from them in hPrime, and finally computes the result by interpolation using the alphaPrime coefficients.
The proland::Deformation::setUniforms method also sets a tangentFrameToWorld mat3 uniform (using the ux, uy and uz defined above) that can be used to transform terrain normals expressed in the tangent frame at the center of the quad, into the planet frame:
vec3 Ntangent = ...; // fetches normal in tangent space
vec3 Nworld = deformation.tangentFrameToWorld * Ntangent;
Terrain quadtree
Distance-based subdivision
The terrain framework represents a terrain with a quadtree that is dynamically subdivided based on the current viewer position, in order to provide more details near the viewer. This subdivision is only based on the distance from the viewer to quads, i.e., it does not depend on the "complexity" of the data tiles for these quads. More precisely, a quad is subdivided if its distance d to the viewer is less than k times its size L, where d is not a Euclidean distance, but a max(dx,dy) distance.
We call k the split distance factor. If you want to get a restricted quadtree, i.e., a quadtree in which the difference between the levels of two neighbor quads is always 0 or 1, then k must be larger than 1.
Increasing the value of k means that quads are subdivided sooner, and appear smaller on screen. Hence you can tune the value of k to get a given resolution on screen. For instance, if each quad is rendered with a texture of T x T pixels, the projected size of these texture pixels on screen will be at most W / (2 k T tan(fov/2)), where W is the screen width in pixels and fov is the field of view angle. The figure below gives an example of the result of this quadtree subdivision rule for several values of k above 1 (see also the "helloworld" example).
- Note:
- in practice the distance between a quad and the viewer also involves altitudes: d=max(min(|x-ox|,|x-ox-L|), min(|y-oy|,|y-oy-L|), z-groundz), where z-groundz is the height of the camera above the ground. Note also that this distance is computed in the local space, not in the deformed space (see above). For this the camera position is transformed from the deformed space to the local space.
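The subdivision test and the projected texel size formula above can be sketched in a few lines of standalone C++ (function names are illustrative, not the Proland API; the distance is the one given in the note above):

```cpp
#include <algorithm>
#include <cmath>

// Split test for a quad of size s with lower-left corner (ox,oy): the quad is
// subdivided when its distance to the viewer (cx,cy), at height camHeight
// above the ground, is less than k * s.
inline bool shouldSplit(double ox, double oy, double s,
                        double cx, double cy, double camHeight, double k) {
    double dx = std::min(std::abs(cx - ox), std::abs(cx - ox - s));
    double dy = std::min(std::abs(cy - oy), std::abs(cy - oy - s));
    double d = std::max({dx, dy, camHeight});
    return d < k * s;
}

// Maximum projected size, in pixels, of a texel of a T x T quad texture, for
// a screen of width W pixels and a field of view fov (in radians).
inline double texelSizeOnScreen(double W, double k, double T, double fov) {
    return W / (2.0 * k * T * std::tan(fov / 2.0));
}
```

For instance, with k = 2 a quad of size 1000 at the origin is split for a ground-level viewer at (2500, 0) (distance 1500 < 2000) but not for one at (5000, 0); and for W = 1024, T = 192 and an 80 degree field of view, texels project to about 1.6 pixels at most.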
Continuous level of details
When a quad is subdivided, popping can occur because the quad is suddenly replaced with 4 sub quads with new associated data. Fortunately, with k > 1, it is possible to do a progressive fading in of the new sub tile data, and a corresponding fading out of the old parent tile data. This replaces a sudden transition with a progressive blending, which is much less visible. This blending can be done as follows: at some point x,y in a quad (ox,oy,l), if the viewer is at (cx,cy), then the blending coefficient defined as blend = clamp((d/l − k − 1) / (k − 1), 0, 1), where d is the max-based distance from the viewer to the point x,y (the same kind of distance as for the subdivision criterion), can be used to mix the old parent tile data xparent and the new sub tile data xchild with x = blend xparent + (1 − blend) xchild.
Indeed when the viewer is at the minimal distance to the quad, dmin=kl, we get blend = clamp(-1/(k-1), 0, 1) = 0, and x = xchild. Inversely, when the viewer is at the maximal distance to the quad, dmax=(2k+1)l, we get blend = clamp(k/(k-1), 0, 1) = 1 and x = xparent. It is easy to compute the distances at which the clamping occurs, which gives the width of the transition region between blend = 0 and blend = 1. This width is equal to (k - 1)l, which shows that the larger is k, the larger is the transition region, and the less noticeable is the transition. This is illustrated below:
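These endpoints can be checked with a standalone version of the coefficient (reconstructed from the GLSL code given in the GLSL uniforms section below; names are illustrative):

```cpp
#include <algorithm>

// Blending coefficient for a quad of size l and a viewer at max-distance d
// from the point: 0 = use the sub tile data, 1 = use the parent tile data.
inline double blendCoef(double d, double l, double k) {
    double b = (d / l - (k + 1.0)) / (k - 1.0);
    return std::min(std::max(b, 0.0), 1.0);
}
```

With k = 2 and l = 1, the transition goes from blend = 0 at d = 3 (= (k+1)l) to blend = 1 at d = 4 (= 2kl), a transition width of (k − 1)l = 1, inside the [kl, (2k+1)l] existence range of the quad.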
GLSL uniforms
The proland::Deformation::setUniforms method sets two uniforms that can be used to compute the above blending coefficient on GPU. The deformation.camera vec4 uniform stores the camera position, relatively to the quad lower left corner ox,oy and divided by the quad size l. The deformation.blending vec2 uniform stores k+1 and k-1. Using these uniforms, the blend coefficient can be computed as follows (the vertex coordinates are supposed to vary between 0 and 1 in the quad):
vec4 c = deformation.camera;   // (cx-ox)/l, (cy-oy)/l, (cz-groundz)/l
vec2 k = deformation.blending; // k+1, k-1
vec2 v = abs(c.xy - gl_Vertex.xy);
float d = max(max(v.x, v.y), c.z);
float blend = clamp((d - k.x) / k.y, 0.0, 1.0);
Terrain classes
A terrain quadtree is represented with a tree of proland::TerrainQuad objects. Each object of this class provides the following fields:
- parent gives a pointer to the parent quad.
- level, tx and ty give the logical coordinates of the quad.
- ox, oy and l give the physical coordinates of the quad.
- zmin and zmax give the minimum and maximum elevations on the quad.
- children is an array of four pointers to the sub quads of this quad. It contains either four NULL pointers if the quad is a leaf, or four non NULL pointers to four sub quads (in the bottom left, bottom right, top left, top right order) if this quad is subdivided.
- visible indicates if this quad is invisible, partially visible or fully visible from the viewer. This field is updated by the proland::TerrainQuad::update method.
The proland::TerrainNode class represents a terrain. It contains a pointer to the root of the terrain quadtree in proland::TerrainNode::root. It also contains the following fields:
- splitDist is the split distance factor k used for distance based subdivision (see above).
- maxLevel is the quadtree level at which the subdivision must be stopped.
- deform is the terrain deformation used for this terrain (see above).
Internally, a proland::TerrainNode stores the current viewer position and the current view frustum planes. These current values can be retrieved in the local and deformed spaces (see Terrain deformation) with proland::TerrainNode::getDeformedCamera, proland::TerrainNode::getDeformedFrustumPlanes, and proland::TerrainNode::getLocalCamera. They are updated by the proland::TerrainNode::update method, which takes as argument a scene node defining the terrain position in world space (and from which the camera position can also be retrieved).
Terrain resource
A proland::TerrainNode can be loaded with the Ork resource framework, using the following format:
<terrainNode name="myTerrain" size="6360000" zmin="0" zmax="10000" deform="sphere" splitFactor="2" maxLevel="16"/>
This resource describes a terrain whose root quad has a size of 12720km x 12720km (12720 = 2*6360), whose elevations are between 0 and 10000m, using a spherical deformation (of course the length unit can be interpreted as you want). The terrain quadtree will be subdivided with a split distance factor k=2 (for a field of view of 80 degrees, and a viewport width of 1024 pixels. For a smaller field of view and/or a larger viewport, subdivisions will automatically occur at a larger distance, so that the size of a quad in pixels stays more or less the same), up to quadtree level 16 (included). Note: currently the optional deform attribute only supports the none and sphere values. In the case of a spherical deformation, the planet radius is set to size. The "terrain1" and "terrain2" examples illustrate how terrain nodes for flat and spherical terrains can be used.
Texture tile samplers
A proland::TerrainNode only stores the current quadtree of a terrain, subdivided based on the distance to the current viewer position. It does not store any data associated with this quadtree. Indeed the terrain or more generally the landscape data is produced by tile producers, and stored in tile storages managed by tile caches (see Producer framework). We therefore need a link between terrains and tile producers, so that producers are asked to produced new tiles when terrain quads are subdivided. This link is provided by the proland::TileSampler class.
A proland::TileSampler is associated with a single GPU tile producer. Its first role is to ask this producer to produce new tiles when a terrain quad is subdivided. Its second role is to set GLSL uniforms to allow a shader to access a texture tile in the tile storage used by this producer.
The first role is performed by the proland::TileSampler::update method. This method takes as argument the root of a terrain quadtree. It compares this quadtree with its previous value during the last call to this method. Then, for each new quad, it asks the associated producer to produce the corresponding tile, with getTile. Conversely, for each old quad (i.e., quads that are no longer part of the quadtree), it informs the producer that the corresponding tile is no longer used, by calling putTile. This ensures that the tile data is "locked" (see Tile cache) in the tile storage as long as the corresponding quad exists.
- Note:
- in fact the proland::TileSampler::update method returns a task graph containing all the tasks to produce the new tiles that must be produced. This task graph must be scheduled for execution in order to actually produce the tiles.
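The getTile / putTile pinning described above can be illustrated with a toy use-count cache (a sketch with invented names, not the Proland implementation):

```cpp
#include <map>
#include <tuple>

// Toy tile cache: getTile increments a use count ("locks" the tile in its
// storage slot), putTile decrements it; a tile whose count reaches 0 becomes
// evictable and is dropped here for simplicity.
struct ToyTileCache {
    std::map<std::tuple<int, int, int>, int> users;  // (level,tx,ty) -> use count
    void getTile(int l, int tx, int ty) { ++users[{l, tx, ty}]; }
    void putTile(int l, int tx, int ty) {
        auto it = users.find({l, tx, ty});
        if (it != users.end() && --it->second == 0) users.erase(it);
    }
    int lockedTiles() const { return int(users.size()); }
};

// Sketch of one update step: pin tiles for new quads, unpin removed quads.
inline int sampleUpdate() {
    ToyTileCache cache;
    cache.getTile(0, 0, 0);           // root quad appears
    for (int i = 0; i < 4; ++i)       // root is subdivided: 4 new sub quads
        cache.getTile(1, i % 2, i / 2);
    cache.putTile(1, 1, 1);           // one sub quad disappears again
    return cache.lockedTiles();       // 4 tiles remain locked
}
```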
In practice it is not always necessary to produce a tile for each quad in the quadtree. For instance it is often sufficient to produce tiles for the leaf quads only, i.e., those that do not have sub quads, which are those that are effectively rendered. It is also common to produce tiles for the visible quads only, i.e., those that are fully or partially visible in the view frustum (see Terrain classes). In order to specify if a tile must be produced or not for a given quad, you can use the following configuration methods:
proland::TileSampler::setStoreLeaf indicates whether a tile must be produced or not for leaf quads. The default is true.
proland::TileSampler::setStoreParent indicates whether a tile must be produced or not for internal quads (i.e., non leaf quads). The default is true.
proland::TileSampler::setStoreInvisible indicates whether a tile must be produced or not for quads out of the view frustum. The default is true.
- proland::TileSampler::setStoreFilter adds an arbitrary tile filter to a list of filters. Each filter takes as argument a terrain quad, and returns whether or not a tile must be produced for it. If at least one filter decides that the tile must be produced, it will be produced.
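Assuming these options and filters combine as just described, the decision can be summarized by a small standalone function (illustrative names, not the Proland API):

```cpp
#include <functional>
#include <vector>

struct QuadInfo { bool leaf; bool visible; };

// Sketch of the "produce a tile for this quad?" decision: the setStore*
// options gate production, and any filter can force it.
struct TileFilterPolicy {
    bool storeLeaf = true, storeParent = true, storeInvisible = true;  // defaults
    std::vector<std::function<bool(const QuadInfo&)>> filters;
    bool needTile(const QuadInfo& q) const {
        bool need = (q.leaf ? storeLeaf : storeParent) &&
                    (q.visible || storeInvisible);
        for (auto& f : filters)
            if (f(q)) need = true;  // one positive filter is enough
        return need;
    }
};

inline bool demoFilterPolicy() {
    TileFilterPolicy p;                      // defaults: everything is produced
    bool a = p.needTile({true, false});      // invisible leaf -> produced
    p.storeInvisible = false;
    bool b = p.needTile({true, false});      // now skipped
    p.filters.push_back([](const QuadInfo&) { return true; });
    bool c = p.needTile({true, false});      // filter forces production
    return a && !b && c;
}
```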
Finally a TileSampler can be used in one of two modes: synchronous or asynchronous. In the default, synchronous mode, the update method uses an immediate deadline for the tasks needed to produce the tiles for the newly created quads. This means that the final frame will not be displayed until all the tiles are produced. When tile data must be loaded from disk, with a high latency, this can lead to visible freezes between frames when the viewer moves.
This can be solved by using the asynchronous mode. In this mode the deadline for the tile production tasks is not set to the current frame. Thus a frame can be displayed even if some data is missing. In this case the first ancestor tile that is ready is used instead. This solves the latency problem, but degrades the quality when the viewer is moving fast, and gives visible popping artifacts when new data suddenly replaces the temporary low resolution data used while waiting for it (this can even lead to gaps between terrain quads, because then the quadtree used for display is not necessarily a restricted quadtree). In order to use this asynchronous mode, several options must be configured properly (the "earth-srtm-async" example illustrates this; note in particular the scheduler definition):
- proland::TileSampler::setStoreParent must be set to true. This is to ensure that we will find at least one ancestor whose data is ready when data for a tile is not yet ready.
- proland::TileSampler::setAsynchronous must be set to true.
- finally the ork::Scheduler used must support prefetching of any kind of tasks (both CPU and GPU). With a ork::MultithreadScheduler, this is only possible if a prefetch rate is specified, or if a fixed frame rate is specified.
- Note:
- You can mix TileSampler in synchronous mode with others using asynchronous mode. Hence some tile data can be produced synchronously while other data is produced asynchronously.
GLSL functions
As said above, the second role of a proland::TileSampler is to set GLSL uniforms allowing shaders to access a texture tile in the tile storage. This role is performed by the proland::TileSampler::setTile method, which takes as argument the logical coordinates of a tile. This method finds the location of this tile in the tile storage using findTile. It then sets the necessary GLSL uniforms to access the content of this storage slot from a shader. More precisely, if the requested tile is not found, its parent tile is looked for instead. If this parent tile is not found either, the parent of the parent tile is looked for, and so on until an ancestor of the requested tile is found. Then the necessary GLSL uniforms are set to allow shaders to access the sub part of the ancestor tile that corresponds to the requested tile.
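The ancestor fallback performed by setTile / findTile can be sketched as follows (standalone illustrative code, not the Proland implementation):

```cpp
#include <set>
#include <tuple>

using TileId = std::tuple<int, int, int>;  // (level, tx, ty)

// If the requested tile is not in storage, fall back to its parent
// (level-1, tx/2, ty/2), and so on up to the root; returns the tile used.
inline TileId findTileOrAncestor(const std::set<TileId>& inStorage, TileId t) {
    auto [l, tx, ty] = t;
    while (l > 0 && inStorage.count({l, tx, ty}) == 0) { --l; tx /= 2; ty /= 2; }
    return {l, tx, ty};
}

inline bool demoAncestorLookup() {
    std::set<TileId> storage = {TileId{0, 0, 0}, TileId{2, 1, 1}};
    return findTileOrAncestor(storage, TileId{3, 2, 3}) == TileId{2, 1, 1}  // parent found
        && findTileOrAncestor(storage, TileId{3, 5, 5}) == TileId{0, 0, 0}  // falls back to root
        && findTileOrAncestor(storage, TileId{2, 1, 1}) == TileId{2, 1, 1}; // exact hit
}
```

When an ancestor that is d levels up is used, the shader uniforms are set so that only the sub-square of the ancestor tile corresponding to the requested tile is sampled.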
In order to facilitate the use of texture tiles stored in tile storages, a tile storage is seen as a new kind of texture. By similarity with the 1D, 2D, 2D array or 3D built-in textures, declared in GLSL with sampler1D, sampler2D, sampler2DArray or sampler3D, and used with the texture1D(), texture2D(), texture2DArray(), or texture3D() functions, we define a new samplerTile type and a new textureTile function for texture tiles stored in tile storages. Their definition is provided in the textureTile.glsl file.
Hence the tiles produced by a GPU tile producer p can be accessed as follows. We first create a TileSampler using this producer:
Ptr<TileSampler> u = new TileSampler("mySamplerTile", p);
The name "mySamplerTile" is the name of the samplerTile uniform that will be used in the shader to access the tiles. After the update method has been called, and after the tasks it returned have been executed, we can set the value of the "mySamplerTile" uniform to a specific tile, in the currently selected GLSL program (see ork::SceneManager::getCurrentProgram), with:
u->setTile(level, tx, ty);
Note the analogy with uniforms:
Ptr<Uniform3f> v = new Uniform3f("myUniform");
v->set(vec3f(1.0, 0.0, 0.0));
We can then render the corresponding quad. In the GLSL code, the tile can be accessed as follows (see the "terrain1" example):
#include "textureTile.glsl"

uniform samplerTile mySamplerTile;

void main() {
    ...
    vec4 v = textureTile(mySamplerTile, uv);
    ...
}
where the uv coordinates must vary between 0 and 1 in the quad (note that textureTile does not sample the border of the tile, if any, but only the interior part: the [0..1] range is mapped to the interior part of the tile).
- Note:
- if the GPU storage uses textures in NEAREST mode, you can still perform a linear interpolation, in the shader, by using textureTileLinear instead of textureTile (this function calls textureTile four times and interpolates the results).
Tile maps
With the above method a samplerTile uniform can access only one tile at a time in a shader. You can of course declare several samplerTile uniforms in your shader, in order to access several tiles (in the same storage or not) simultaneously. Still, you are limited to selecting a fixed number of tiles, drawing the corresponding quad, selecting another set of tiles, drawing the corresponding quad, and so on for all quads. However it is sometimes necessary to have access to all the tiles of a producer (or of several producers) simultaneously. This can be done with tile maps: a tile map is an indirection structure on GPU that indicates, for each tile, where it is stored in a tile storage. Since a storage can store the tiles of several producers, you can then have access to all the tiles of these producers.
Usage
A tile map is used via a TileSampler. But it is important to know that a TileSampler used to access a tile map does not ask its associated producer to produce new tiles when quads are subdivided. In other words it can only access tiles, it cannot produce them. Hence it is necessary to use a "companion" TileSampler, without tile map but associated with a tile producer using the same tile storage, so that tiles can be effectively produced (in fact you can have several such "companion" samplers).
A normal TileSampler can be changed to one used to access a tile map as follows:
- the first step is to declare the tile map in the GPU producer associated with the TileSampler, with the tileMap="true" attribute (see GPUTileStorage).
- the terrain node with which the "companion" samplers are associated must be declared with proland::TileSampler::addTerrain.
The tile map can then be used by calling the proland::TileSampler::setTileMap method, before using the textureQuadtree function in your shader (the "terrainShader.glsl" file in the "terrain5" example illustrates this):
#include "textureTile.glsl"

uniform samplerTile myTiles;

void main() {
    ...
    vec4 v = textureQuadtree(myTiles, xy, 0.0);
    ...
}
This function takes as argument x,y physical coordinates (varying between -L/2 and L/2, where L is the terrain size, i.e., the root quad size - see Producer framework; the third argument, here 0.0, is the producer id). It first finds the logical coordinates of the leaf quad that contains this point, and then uses the tile map to find the storage slot containing the corresponding tile. It finally returns the content of this tile at the requested location.
Algorithm
The first step finds the logical coordinates (level,tx,ty) of the leaf quad q that contains the point p of physical coordinates (x,y), assuming that the quadtree of size L is subdivided using the split distance factor k>1, with a viewer at (cx,cy). Once level is known, finding tx and ty is trivial (indeed tx = ⌊ 2^level (x/L+1/2) ⌋, and similarly for ty). So the main problem is to compute level.
Let us note d = max(|x−cx|, |y−cy|) the distance between p and the viewer, and dq the (unknown) distance between the quad q and the viewer. We have dq < d < dq + L/2^level. By hypothesis q is not subdivided, which implies dq ≥ kL/2^level, and thus d > kL/2^level.
By hypothesis again the parent quad of q, noted r, is subdivided, which implies dr < kL/2^(level−1). With dr < d < dr + L/2^(level−1), this gives d < (k+1)L/2^(level−1).
We can then consider two cases: d < kL/2^(level−1), or d > kL/2^(level−1). In the first case we get with the first relation kL/2^level < d < kL/2^(level−1), which gives level = ⌊ 1 + log2(kL/d) ⌋. In the second case we get with the second relation kL/2^(level−1) < d < (k+1)L/2^(level−1). After some rewriting, this gives 1 + log2(kL/d) < level < 1 + log2(kL/d) + log2(1+1/k) < 2 + log2(kL/d). We conclude that, in both cases, ⌊ 1 + log2(kL/d) ⌋ ≤ level ≤ ⌊ 2 + log2(kL/d) ⌋. So we compute level as follows: we first compute l = ⌊ 1 + log2(kL/d) ⌋, deduce tx and ty from that, and test if the distance dq for this quad is less than kL/2^l or not. Depending on the result, we know that level is either l or l + 1.
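This two-candidate algorithm can be written as standalone C++ and checked against a brute-force subdivision from the root (illustrative code; quadDist uses the true set distance, which is 0 when the viewer is inside the quad):

```cpp
#include <algorithm>
#include <cmath>

// Per-axis distance from viewer coordinate c to the interval [o, o+s].
inline double axisDist(double c, double o, double s) {
    return std::max({o - c, c - o - s, 0.0});
}

// max-norm distance from the viewer (cx,cy) to the level-l quad containing
// (x,y), for a root quad [-L/2, L/2]^2.
inline double quadDist(double x, double y, double cx, double cy, double L, int l) {
    double s = L / double(1 << l);
    double ox = s * std::floor((x + L / 2) / s) - L / 2;
    double oy = s * std::floor((y + L / 2) / s) - L / 2;
    return std::max(axisDist(cx, ox, s), axisDist(cy, oy, s));
}

// Closed-form computation of the leaf level, as described above.
inline int leafLevel(double x, double y, double cx, double cy,
                     double L, double k, int maxLevel) {
    double d = std::max(std::abs(x - cx), std::abs(y - cy));
    if (d <= 0.0) return maxLevel;  // viewer exactly at p: deepest level
    int l = int(std::floor(1.0 + std::log2(k * L / d)));
    l = std::min(std::max(l, 0), maxLevel);
    // the level-l quad is subdivided iff its distance is less than k * L / 2^l
    if (l < maxLevel && quadDist(x, y, cx, cy, L, l) < k * L / double(1 << l)) ++l;
    return l;
}

// Reference implementation: recursive subdivision from the root.
inline int leafLevelBruteForce(double x, double y, double cx, double cy,
                               double L, double k, int maxLevel) {
    int l = 0;
    while (l < maxLevel && quadDist(x, y, cx, cy, L, l) < k * L / double(1 << l)) ++l;
    return l;
}
```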
Once we have the logical tile coordinates, the second step must find where this tile is stored in the tile storage. This is the role of the tile map, which stores for each tile its slot in the storage (if present in the storage). In fact this map cannot have one entry for each potential tile: for a quadtree depth of 16, there are more than 4^16 potential tiles, i.e., more than 4 billion entries! A solution is to store an encoding of the quadtree on GPU. But finding a tile would require a full tree traversal. Another solution is to use a hash table on GPU, but it would be difficult to avoid collisions to ensure a maximum efficiency. We use another solution, which ensures a constant time access (no tree traversal, no collisions, small memory requirements). We use the fact that, at each quadtree level, the number of leaf tiles that can exist simultaneously is bounded and independent of the level.
A tile (l,tx,ty) cannot exist if its parent tile is not subdivided. If the viewer is at (cx,cy), the parent tile containing the viewer, (l−1, ⌊ 2^(l−1)(cx/L+1/2) ⌋, ⌊ 2^(l−1)(cy/L+1/2) ⌋), is subdivided. And all tiles of level l−1 at a distance less than kL/2^(l−1) are also subdivided. This gives at most ⌈ k ⌉ such tiles around the parent tile, i.e., at most (2⌈k⌉+1)^2 tiles of level l−1. Hence there are at most (4⌈k⌉+2)^2 leaf tiles of level l at the same time, whatever the value of l. For k<2, this gives a tile map of size 10^2 · depth, e.g., 1600 entries for a maximum depth of 16 (instead of 4 billion!).
In summary the CPU updates the tile map texture (at each frame, depending on the current cx,cy value) by storing for each leaf quad (l,tx,ty) its slot in the storage, in a texel whose position is derived from l and from the coordinates ix = tx − 2 ⌊ 2^(l−1)(cx/L+1/2) ⌋ + ⌈ k ⌉ and iy = ty − 2 ⌊ 2^(l−1)(cy/L+1/2) ⌋ + ⌈ k ⌉.
On GPU, once the (l,tx,ty) coordinates corresponding to the physical coordinates (x,y) have been found, the index i is computed, the value of the tile map at this index is retrieved to get the slot position, and finally the texture tile in this slot is sampled to get the result. The "terrainShader.glsl" file in the "terrain5" example contains a concrete implementation of the above algorithm.
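The capacity bound and the iy coordinate above translate directly into code (the exact texel layout of the real tile map is an implementation detail, so tileMapIy is only a sketch):

```cpp
#include <cmath>

// Upper bound on the number of simultaneous leaf tiles per level:
// (4 * ceil(k) + 2)^2, independent of the level.
inline int leafTilesPerLevelBound(double k) {
    int ck = int(std::ceil(k));
    return (4 * ck + 2) * (4 * ck + 2);
}

// iy coordinate of tile (l,tx,ty) in the tile map, as given above; the ix
// coordinate is the symmetric expression with tx and cx.
inline int tileMapIy(int l, int ty, double cy, double L, double k) {
    // 2^(l-1) * (cy/L + 1/2): ty of the parent tile containing the viewer
    int typ = int(std::floor(std::ldexp(1.0, l - 1) * (cy / L + 0.5)));
    return ty - 2 * typ + int(std::ceil(k));
}
```

For k < 2 this gives 100 entries per level, hence 1600 entries for a depth of 16, as stated above.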
Texture tile sampler resource
A proland::TileSampler can be loaded with the Ork resource framework, using the following format (see the "terrain1" example):
<tileSampler sampler="mySamplerTile" producer="myProducer" storeLeaf="true" storeParent="false" storeInvisible="false"/>
The sampler attribute specifies the name of the GLSL samplerTile uniform that will be set by setTile. The producer attribute is the name of a GPU tile producer resource. The storeLeaf, storeParent and storeInvisible attributes are options that specify when a tile must be produced for a given quad (see above). Using tileSamplerZ instead of tileSampler creates a sub class of TileSampler that reads back the tile data on GPU, supposed to be elevation tiles, and uses this data to update the zmin and zmax fields of terrain quads (see Terrain classes), as well as the terrain height under the camera, in proland::TerrainNode::groundHeightAtCamera.
A proland::TileSampler to access a tile map can be loaded as follows (see the "terrain5" example):
<tileSampler sampler="mySamplerTile" producer="myProducer" terrains="myTerrain1,myTerrain2,myTerrain3"/>
where the terrains attribute specifies the "companion" proland::TileSampler objects, indirectly via terrain node resources (you can specify at most 6 terrains).
Terrain tasks
Three ork::AbstractTask sub classes are provided to update a terrain node, to update a texture tile sampler, and finally to draw a terrain.
UpdateTerrainTask
The proland::UpdateTerrainTask simply calls the proland::TerrainNode::update method on a terrain. This updates the terrain quadtree, based on the new current camera position. A proland::UpdateTerrainTask can be created with the Ork resource framework, using the following format (the "helloworld" example illustrates this):
<updateTerrain name="this.terrain"/>
The name attribute specifies the terrain node that must be updated.
UpdateTileSamplersTask
The proland::UpdateTileSamplersTask simply calls the proland::TileSampler::update method on a set of texture tile samplers. This produces tiles for the new quads that appeared since the last execution of this task. A proland::UpdateTileSamplersTask can be created with the Ork resource framework, using the following format (the "terrain1" example illustrates this):
<updateUniform name="this.terrain"/>
This task updates all the texture tile samplers that are associated with the scene node to which the Ork method that executes this task belongs. Indeed a scene node can have associated uniforms, including proland::TileSampler (a sub class of ork::Uniform).
DrawTerrainTask
The proland::DrawTerrainTask draws a mesh for each leaf quad of a terrain, using the currently selected program. Typically the mesh is a regular grid mesh, which is translated, scaled, and displaced by the GLSL program to draw each quad at the proper location in the terrain. Before drawing each quad, this task sets the uniforms that are necessary to deform the terrain quad with proland::Deformation::setUniforms. It also sets the uniforms necessary to access the tiles for this quad, using proland::TileSampler::setTile or proland::TileSampler::setTileMap (for each texture tile sampler associated with the scene node to which the Ork method that executes this task belongs).
A proland::DrawTerrainTask can be created with the Ork resource framework, using the following format:
<drawTerrain name="this.terrain" mesh="this.grid" culling="true"/>
The mesh attribute is the name of a mesh resource (see Meshes). It specifies the mesh that is used to draw each leaf quad. It can have the following form:
- name.mesh: in this case the mesh is the mesh resource whose name is name.mesh.
- this.name, $v.name, flag.name: in this case the mesh is the mesh name of the target scene node this, $v or flag (see Methods).
The culling attribute specifies if all the leaf quads must be drawn, or only those that are in the view frustum. The default value is false, meaning that all leaf quads are drawn.
User Interface
Proland's whole interface is based on EventHandlers. To be user-friendly and quickly usable, it provides the basics for navigating through the large scenes that you can display in Proland.
The UI is split into two parts: the handling of events (for navigation, editing, ...), and the TweakBars, used to provide a visual interface for these options. TweakBars are also EventHandlers, which enables them to use the keyboard, mouse and OpenGL events.
Default Handlers
View Handlers
When using a library such as Proland, a user-friendly navigation system is mandatory. Proland provides such a system, called proland::BasicViewHandler. Its behavior is quite straightforward: when no other EventHandler catches the keyboard/mouse events, it uses them for navigation. The default navigation system is the following: PageUp and PageDown move forward and backward (Z axis). The mouse left-click moves the camera along the X and Y axes, while the right-click makes the sun turn around the SceneNode. CTRL + click turns the camera. The mouse wheel is the same as PageUp and PageDown.
proland::BasicViewHandler requires a proland::BasicViewHandler::ViewManager in order to work properly. This ViewManager provides access to an ork::SceneManager, to a proland::TerrainViewController and to the screen to world transformation. BasicViewHandler directly computes the new position at each frame, and sets it in the TerrainViewController's camera position.
proland::TerrainViewController controls the camera position and orientation. The default implementation uses a flat terrain as root node, but Proland provides implementations for planets and cylinders as well.
BasicViewHandler can be loaded in the Ork Resource framework:
<basicViewHandler name="myViewHandler" viewManager="myWindow" next="anOptionalEventHandler"/>
viewManager: the proland::BasicViewHandler::ViewManager object that handles the Navigation UI.
next: an optional EventHandler that will receive the Events not captured by the view handler.
The "terrain3" example illustrates how this view handler can be used (especially when compared with the "terrain2" example) without Ork resources. The "ocean1" example illustrates how it can be used via Ork resources.
Event Recorder
The user might want to record a set of actions and replay it later, or to create videos of what he is doing in Proland. The proland::EventRecorder is able to do that. When pressing F12, it starts recording every event occurring until F12 is pressed again (all keyboard, mouse and OpenGL events are recorded together with the time at which they occurred). Then, the user can replay them by pressing F11. When pressing Shift + F11, frames will be saved on the disk at a rate of 25 frames per second (using the original dates of each event, not the time during replay, which is perturbed by the time it takes to save frames on disk). This allows users to create a video afterwards. It is also able to save and load the recorded events.
EventRecorder records the Events provided by a Recordable object. It works as a transparent layer on EventHandlers, i.e., it takes the place of a UI manager (a view handler for example), records all the events, and then passes them to the real UI manager, except when playing a video.
EventRecorder can be loaded in the Ork Resource framework:
<eventRecorder name="myEventRecorder" recorded="myWindow" videoDirectory="\home\myVideos\" cursorTexture="cursor" next="myBasicViewHandler"/>
- recorded: the Recordable resource recorded by this EventRecorder.
- videoDirectory: the file name format to be used to save the video frames.
- cursorTexture: a cursor texture to display the cursor position during replay.
- next: the EventHandler that must handle the events recorded and replayed by this EventRecorder.
TweakBars
Apart from the controls, the graphical part of the UI is also important: the user must be able to quickly use the interface without knowing all the hotkeys used in the program. Also, the information must be clearly visible, and easily managed for the developer. Philippe Decaudin developed a toolbar framework that has those qualities: AntTweakBar. Proland's toolbars are based on this framework.
To avoid displaying too many toolbars on the screen, Proland contains a proland::TweakBarManager, able to add the content from any proland::TweakBarHandler, and enable/disable them. When deactivated, they can themselves disable their linked EventHandler, if any (an editor for example). The TweakBarHandlers can be of three types: permanent (always activated), exclusive (they can't be enabled at the same time as other exclusive handlers) or regular (can be enabled/disabled at will).
As for EventRecorder, TweakBarManager is a transparent layer over a given UI manager. It is an EventHandler, and thus able to catch events and pass them to its TweakBarHandlers. Those then determine if anything should be changed in the data they display; if so, the manager recreates the tweak bar with updated content.
Once again, the TweakBarManager can be loaded with the Ork Resource framework:
<tweakBarManager name="myTweakBarManager" minimized="false" next="myViewManager">
    <editor id="myEditor1" bar="myTweakBarEditor" exclusive="true" permanent="false" key="r"/>
</tweakBarManager>
minimized: Determines if the TweakBarManager starts minimized or not.
next: the EventHandler that must handle the unused events.
bar: a TweakBarHandler that will add its content to the TweakBarManager.
exclusive: Determines if the TweakBarHandler will be exclusive or not. Only one exclusive handler can be activated at the same time.
permanent: Determines if the TweakBarHandler can be disabled.
key: an optional hotkey to disable/enable the TweakBarHandler.
A few Tweakbars are available by default:
- proland::TweakResource: a flexible TweakBar directly described in the XML file, just like any other ork resource.
- proland::TweakSceneGraph: enables control of the scene graph. Uses a proland::SceneVisitor object to browse the scene graph and display every node in the scene. Then, it allows the user to enable/disable almost any node.
- proland::TweakViewHandler: Controls a BasicViewHandler. Contains predefined positions accessible in one click. Also displays the current position.
The Proland examples illustrate how these tweak bars can be used (in particular the "edit1", "edit2", "edit3" and "edit4" examples). The figure below shows the interface of the proland::TweakSceneGraph: the tweak bar gives a tree representation of the scene graph, where each scene node can be expanded (allowing to show / hide this node, view its producers, invalidate their tiles, etc). This bar also gives a list of textures whose content can be displayed (on the right we see the texture used for the ortho producer tile cache). Finally it also shows statistics about the tile caches (capacity, number of tiles in use or not, etc).
| http://proland.imag.fr/doc/proland-4.0/core/html/index.html | CC-MAIN-2018-13 | refinedweb | 11,565 | 60.75 |
RAIL_IEEE802154_AddrConfig_t Struct Reference
A configuration structure for IEEE 802.15.4 Address Filtering.
#include <rail_ieee802154.h>
A configuration structure for IEEE 802.15.4 Address Filtering.
The broadcast addresses are handled separately and do not need to be specified here. Any address to be ignored should be set with all bits high.
This structure allows configuration of multiple addresses.
Definition at line 229 of file rail_ieee802154.h.
Field Documentation
◆ longAddr
A 64-bit address for destination filtering.
All must be specified. This field is parsed in over-the-air (OTA) byte order. To disable a long address, set it to the reserved value of 0x00 00 00 00 00 00 00 00.
Definition at line 245 of file rail_ieee802154.h.
◆ panId
PAN IDs for destination filtering.
All must be specified. To disable a PAN ID, set it to the broadcast value, 0xFFFF.
Definition at line 234 of file rail_ieee802154.h.
◆ shortAddr
Short network addresses for destination filtering.
All must be specified. To disable a short address, set it to the broadcast value, 0xFFFF.
Definition at line 239 of file rail_ieee802154.h.
The documentation for this struct was generated from the following file:
- protocol/ieee802154/rail_ieee802154.h
KmPlot
#include <function.h>
Detailed Description
This is the non-visual mathematical expression.
- Note
- when adding new member variables, make sure to update operator != and operator =.
Definition at line 238 of file function.h.
Member Enumeration Documentation
Definition at line 241 of file function.h.
Constructor & Destructor Documentation
Definition at line 278 of file function.cpp.
Definition at line 294 of file function.cpp.
Member Function Documentation
The full function expression, e.g. "f(x,k)=(x+k)(x-k)".
Definition at line 301 of file function.h.
- Returns
- true if the fstr looks like "f(x) = ..."
- false if the fstr looks like "y = ..." (note that this depends on the type of equation, so if this is a Cartesian equation and the fstr looks like "a = ..." (not y) then it'll be considered a function, even if it isn't a very useful one.
Definition at line 343 of file function.cpp.
- Returns
- the name of the function, e.g. for the cartesian function f(x)=x^2, this would return "f".
Definition at line 317 of file function.cpp.
Definition at line 549 of file function.cpp.
Assigns the value in other to this equation.
Definition at line 556 of file function.cpp.
- Returns
- the order of the differential equations.
Definition at line 299 of file function.cpp.
- Returns
- the name of the parameter variable (or a blank string if a parameter is not used).
Definition at line 466 of file function.cpp.
Definition at line 277 of file function.h.
- Returns
- the number of plus-minus symbols in the equation.
Definition at line 311 of file function.cpp.
The current plus-minus signature (true for plus, false for minus).
Definition at line 340 of file function.h.
- Parameters
- fstr
- Returns
- whether fstr could be parsed correctly. Note that if it was not parsed correctly, then this will return false and this class will not be updated.
Definition at line 476 of file function.cpp.
- See also
- pmSignature.
Definition at line 542 of file function.cpp.
The type of function.
Definition at line 256 of file function.h.
Updates m_variables.
Definition at line 375 of file function.cpp.
- Returns
- whether the function accepts a parameter in addition to the x (and possibly y) variables.
Definition at line 292 of file function.h.
- Returns
- a list of variables, e.g. {x} for "f(x)=y", and {x,y,k} for "f(x,y,k)=(x+k)(y+k)".
Definition at line 287 of file function.h.
Member Data Documentation
For differential equations, all the states.
Definition at line 335 of file function.h.
Definition at line 354 of file function.h.
Definition at line 355 of file function.h.
Definition at line 356 of file function.h.
Definition at line 353 of file function.h.
Definition at line 352 of file function.h.
Cached list of variables.
Updated when setFstr is called.
Definition at line 360 of file function.h.
Pointer to the allocated memory for the tokens.
Definition at line 269 of file function.h.
Array index to the token.
Definition at line 273 of file function.h.
The documentation for this class was generated from the following files:
Documentation copyright © 1996-2020 The KDE developers.
String.prototype.charCodeAt()
Parameters
index: An integer greater than or equal to 0 and less than the length of the string; if it is not a number, it defaults to 0.
Return value
A number representing the UTF-16 code unit value of the character at the given index; NaN if index is out of range.
Description
Unicode code points range from 0 to 1114111 (0x10FFFF). The first 128 Unicode code points are a direct match of the ASCII character encoding. For information on Unicode, see the JavaScript Guide.
Note that charCodeAt() will always return a value that is less than 65536. This is because the higher code points are represented by a pair of (lower valued) "surrogate" pseudo-characters which are used to comprise the real character. Because of this, in order to examine or reproduce the full character for individual characters of value 65536 and above, it is necessary to retrieve not only charCodeAt(i), but also charCodeAt(i+1) (as if examining/reproducing a string with two letters), or to use codePointAt(i) instead. See examples 2 and 3 below.
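To see the surrogate-pair behaviour described above in action, here is a small runnable example; U+1D306 (TETRAGRAM FOR CENTRE) is simply a convenient character outside the Basic Multilingual Plane:

```javascript
// U+1D306 lies outside the Basic Multilingual Plane, so it is stored
// as a surrogate pair of two UTF-16 code units.
const s = '\uD834\uDF06';

console.log(s.charCodeAt(0).toString(16)); // "d834" (high surrogate)
console.log(s.charCodeAt(1).toString(16)); // "df06" (low surrogate)

// codePointAt() combines the pair into the real code point:
console.log(s.codePointAt(0).toString(16)); // "1d306"

// Reassembling the code point manually from the two code units:
const hi = s.charCodeAt(0);
const low = s.charCodeAt(1);
const combined = ((hi - 0xD800) * 0x400) + (low - 0xDC00) + 0x10000;
console.log(combined === 0x1D306); // true
```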
charCodeAt() returns NaN if the given index is less than 0 or is equal to or greater than the length of the string.
Backward compatibility: In historic versions (like JavaScript 1.2) the charCodeAt() method returned a number indicating the ISO-Latin-1 codeset value of the character at the given index.
Examples
Using charCodeAt()
The following example returns 65, the Unicode value for A.
'ABC'.charCodeAt(0); // returns 65

// Example: fixing charCodeAt() to handle characters outside the
// Basic Multilingual Plane
function fixedCharCodeAt(str, idx) {
  // ex. fixedCharCodeAt('\uD800\uDC00', 0); // 65536
  // ex. fixedCharCodeAt('\uD800\uDC00', 1); // false
  idx = idx || 0;
  var code = str.charCodeAt(idx);
  var hi, low;

  // High surrogate (could change last hex to 0xDB7F
  // to treat high private surrogates
  // as single characters)
  if (0xD800 <= code && code <= 0xDBFF) {
    hi = code;
    low = str.charCodeAt(idx + 1);
    if (isNaN(low)) {
      throw 'High surrogate not followed by low surrogate in fixedCharCodeAt()';
    }
    return ((hi - 0xD800) * 0x400) + (low - 0xDC00) + 0x10000;
  }
  if (0xDC00 <= code && code <= 0xDFFF) { // Low surrogate
    // We return false to allow loops to skip this iteration since we should
    // have already handled the high surrogate in the previous iteration
    return false;
    /*hi = str.charCodeAt(idx - 1);
    low = code;
    return ((hi - 0xD800) * 0x400) +
      (low - 0xDC00) + 0x10000;*/
  }
  return code;
}
Depict-It
Depict-It is a party game for 4 to 8 players (ideally!) where you mutate a phrase through drawings and captions, to make up funny scenarios with your friends.
The rules of the game
- The game is played in rounds.
- Each player is provided with a
Game Stackcontaining a
Captionand a blank screen for them to draw on.
- They have 180 seconds to draw a picture of what is described in the caption.
- Once either all players have finished, or 180 seconds elapse, each drawing is passed to the next player.
- Now each player writes a caption which describes the drawing presented to them.
- Once the first player has their own Game Stack returned to them, the Scoring phase begins.
- During scoring, each progression from starting caption through drawings and descriptions is displayed. The players can vote on the funniest card in the progression.
- Points are awarded to each player based on the number of votes they've received, and the Host can start a new round.
About this document
If you're just interested in running this project on your own machine, or on Azure, scroll to the bottom of this document for instructions.
The rest of this readme is a teardown, and an explanation of how the game is made.
What are we going to build?
Depict-It is a progressive web app. It is built with JavaScript, Vue.js, HTML and CSS.
The game uses the Ably Basic Peer to Peer demo as a base and Ably Channels to send messages between players.
We'll be hosting the application on Azure Static Web Applications and we'll use Azure Blob Storage to store user generated content.
Dependencies
The app uses Vue.js and Ably.
A brief introduction to Vue.js.
- vue.js Github repo
Vue.js is a single-page app framework, and we will use it to build the UI of the app. The Vue code lives in index.js and handles all of the user interactions. We're using Vue because it doesn't require a toolchain and it provides simple binding syntax for updating the UI when data changes.
A Vue app looks a little like this abridged sample:
var app = new Vue({
  el: '#app',
  data: {
    greeting: "hello world",
    displayGreeting: true,
  },
  methods: {
    doSomething: async function(evt) { ... }
  }
});
It finds an element with the id of
app in the markup, and treats any elements within it as markup that can contain
Vue Directives - extra attributes to bind data and manipulate the HTML based on the application's state.
Typically, the Vue app makes the properties of the data object available to bind into your markup (such as
greeting in the above code snippet). When data changes, it'll re-render the parts of the UI that are bound to it.
Vue.js exposes a
methods property, which can be used to implement things like click handlers and callbacks from the UI, like the
doSomething function above.
This snippet of HTML should help illustrate how Vue if-statements and directives work:
<main id="app">
  <div v-if="displayGreeting">
    {{ greeting }}
  </div>
</main>
Here you'll see Vue's
v-if directive, which means that the
div and its contents will only display if the
displayGreeting data property is true.
You can also see Vue's binding syntax, where we use
{{ greeting }} to bind data to the UI.
Ably Channels for pub/sub
The app uses Ably for pub/sub messaging between the players. Ably is an enterprise-ready pub/sub messaging platform that makes it easy to design, ship, and scale critical realtime functionality directly to your end-users.
Ably Channels are multicast (many publishers can publish to many subscribers) and we can use them to build peer-to-peer apps.
"Peer to peer" (p2p) is a term from distributed computing that describes any system where many participants, often referred to as "nodes", can participate in some form of collective communication. The idea of peer to peer was popularised in early file sharing networks, where users could connect to each other to exchange files, and search across all of the connected users. In this demo, we're going to build a simple app that will allow one of the peers to elect themselves to be the "leader", and co-ordinate communication between each instance of our app.
Ably channels and API keys
In order to run this app, you will need an Ably API key. If you are not already signed up, you can sign up for a free account to get one; we will use this key later when we build our app.
This app is going to use Ably Channels and Token Authentication.
Making sure to send consistent messages by wrapping the Ably client
In PubSubClient.js we make a class called
PubSubClient - which adds metadata to messages sent outwards, so we don't have to remember to do it in the calling code.
class PubSubClient {
  constructor(onMessageReceivedCallback) {
    this.connected = false;
    this.onMessageReceivedCallback = onMessageReceivedCallback;
  }
First we define a
constructor for the class - and set up some values - a property called
connected, set to false, and
onMessageReceivedCallback - a function passed to the constructor that we will use later when Ably messages arrive.
Inside the
PubSubClient class, we define a
connect function:
async connect(identity, uniqueId) {
  if (this.connected) return;

  this.metadata = { uniqueId: uniqueId, ...identity };

  const ably = new Ably.Realtime.Promise({ authUrl: '/api/createTokenRequest' });
  this.channel = await ably.channels.get(`p2p-sample-${uniqueId}`);

  this.channel.subscribe((message) => {
    this.onMessageReceivedCallback(message.data, this.metadata);
  });

  this.connected = true;
}
While we're making a connection, we're subscribing to an Ably Channel and adding a callback function that passes on the
data property from the Ably message. The data property in the Ably message is the JSON that the
peers sent, along with some
identifying metadata. The
PubSubClient calls the callback function that we pass to its constructor with the data and the metadata we receive from Ably - in this case, the metadata would contain the
identity object with a unique ID and name for each player.
In the
PubSubClient we also define a
sendMessage function, that adds some functionality on top of the default
Ably publish.
sendMessage(message, targetClientId) {
  if (!this.connected) {
    throw "Client is not connected";
  }

  message.metadata = this.metadata;
  message.forClientId = targetClientId ? targetClientId : null;
  this.channel.publish({ name: "myMessageName", data: message });
}
}
This ensures that whenever
sendMessage is called, the data stored in
this.metadata that was set during construction, is included. We're also making sure that if the message is for a specific peer - set using
targetClientId - then this property is added to our message before we publish it on the Ably Channel.
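As a concrete illustration of this enrichment, here is a simplified sendMessage run against a stubbed channel. The stub and the client object are illustrative sketches, not code from the repo:

```javascript
// A stand-in channel that records published messages instead of
// sending them to Ably (illustrative stub, not the real SDK).
const publishedMessages = [];
const stubChannel = { publish: (msg) => publishedMessages.push(msg) };

// Simplified version of PubSubClient.sendMessage from above.
const client = {
  connected: true,
  metadata: { clientId: "client-1", friendlyName: "Jo" },
  channel: stubChannel,
  sendMessage(message, targetClientId) {
    if (!this.connected) throw "Client is not connected";
    message.metadata = this.metadata;
    message.forClientId = targetClientId ? targetClientId : null;
    this.channel.publish({ name: "myMessageName", data: message });
  }
};

client.sendMessage({ kind: "connected" });        // broadcast
client.sendMessage({ kind: "wait" }, "client-2"); // targeted

console.log(publishedMessages[0].data.forClientId); // null
console.log(publishedMessages[1].data.forClientId); // "client-2"
```

Every published message carries the sender's metadata, and forClientId is only set when a target was supplied.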
The
PubSubClient is passed to the instances of our
P2PClient and
P2PServer classes, to make sure they publish messages in a predictable way.
Building a web app
The application is composed of a
Vue UI, and two main classes,
P2PClient and
P2PServer.
The
peer who elects themselves as host will be the only one to have an instance of
P2PServer and all of the
peers will be
P2PClients. When we define the Vue app, we create two
null properties, one for each of these things, inside
Vue data:
var app = new Vue({
  el: '#app',
  data: {
    p2pClient: null,
    p2pServer: null,
    ...
When a Vue instance is created, it adds all the properties found in its data object to Vue’s reactivity system. When the values of those properties change, the view will “react”, updating to match the new values.
By defining both of the
p2pClient and
p2pServer properties inside of Vue's data object, they become reactive - any changes observed to the properties will cause the UI to re-render.
Our Vue app only contains two functions, one to start
hosting and the other to
join. In reality, they're both doing the same thing (connecting to an
Ably channel by name), but depending on which button is clicked in the UI, that
peer will either behave as a host or a client.
host: async function(evt) {
  evt.preventDefault();

  const pubSubClient = new PubSubClient((message, metadata) => {
    handleMessagefromAbly(message, metadata, this.p2pClient, this.p2pServer);
  });

  const identity = new Identity(this.friendlyName);
  this.p2pServer = new P2PServer(identity, this.uniqueId, pubSubClient);
  this.p2pClient = new P2PClient(identity, this.uniqueId, pubSubClient);

  await this.p2pServer.connect();
  await this.p2pClient.connect();
},
The
host function creates an instance of the
PubSubClient and provides it with a callback to
handleMessageFromAbly. Afterwards, it:
- Creates a new Identity instance, using the friendlyName bound to our UI.
- Creates a new P2PServer.
- Creates a new P2PClient.
- Connects to each of them (which in turn, calls connect on the PubSubClient instance).
Joining is very similar:
join: async function(evt) {
  evt.preventDefault();

  const pubSubClient = new PubSubClient((message, metadata) => {
    handleMessagefromAbly(message, metadata, this.p2pClient, this.p2pServer);
  });

  const identity = new Identity(this.friendlyName);
  this.p2pClient = new P2PClient(identity, this.uniqueId, pubSubClient);

  await this.p2pClient.connect();
}
Here, we're doing exactly the same as the host, except we're only creating a
P2PClient.
HandleMessageFromAbly
handleMessageFromAbly is the callback function that the
PubSubClient will trigger whenever a message is received on the Ably Channel.
function shouldHandleMessage(message, metadata) {
  return message.forClientId == null
    || !message.forClientId
    || (message.forClientId && message.forClientId === metadata.clientId);
}

function handleMessagefromAbly(message, metadata, p2pClient, p2pServer) {
  if (shouldHandleMessage(message, metadata)) {
    p2pServer?.onReceiveMessage(message);
    p2pClient?.onReceiveMessage(message);
  }
}
handleMessageFromAbly is responsible for calling
onReceiveMessage on the instance of
P2PServer if the current player is the
host, and then calling
onReceivedMessage on the instance of
P2PClient.
If the received message has a property called
forClientId and it is not for the current client, the message will not be processed.
This is deliberately not secure. All the messages sent on our
Ably channel are multicast, and received by all peers, so it should not be considered tamper proof - but it does prevent us from having to filter inside of our client and server instances.
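A quick demonstration of the filtering rules, reusing the shouldHandleMessage function from above:

```javascript
// shouldHandleMessage, as defined above.
function shouldHandleMessage(message, metadata) {
  return message.forClientId == null
    || !message.forClientId
    || (message.forClientId && message.forClientId === metadata.clientId);
}

const me = { clientId: "client-1" };

// Broadcast messages are handled by everyone.
console.log(shouldHandleMessage({ forClientId: null }, me)); // true

// Messages addressed to this client are handled.
console.log(shouldHandleMessage({ forClientId: "client-1" }, me)); // true

// Messages addressed to somebody else are ignored.
console.log(shouldHandleMessage({ forClientId: "client-2" }, me)); // false
```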
P2PClient
The
P2PClient class does most of the work in the app. It is responsible for sending a
connected message over the
PubSubClient when
connect is called, and most importantly, for keeping track of a copy of the
serverState whenever a message is received.
class P2PClient {
  constructor(identity, uniqueId, ably) {
    this.identity = identity;
    this.uniqueId = uniqueId;
    this.ably = ably;

    this.depictIt = null;
    this.serverState = null;
    this.countdownTimer = null;

    this.state = {
      status: "disconnected",
      instructionHistory: [],
      lastInstruction: null
    };
  }
The
P2PClient constructor assigns its parameters to instance variables, and initializes a
null
this.serverState property, along with its own client state in
this.state.
We then go on to define the
connect function:
async connect() {
  await this.ably.connect(this.identity, this.uniqueId);
  this.ably.sendMessage({ kind: "connected" });
  this.state.status = "awaiting-acknowledgement";
  // this.depictIt = new DepictItClient(this.uniqueId, this.ably);
}
This uses the provided
PubSubClient (here stored as the property
this.ably) to send a
connected message. The
PubSubClient is doing the rest of the work - adding in the
identity of the sender during the
sendMessage call. It also sets
this.state.status to
awaiting-acknowledgement - the default state for all of the client instances until the
P2PServer has sent them a
connection-acknowledged message.
OnReceiveMessage does a little more work:
onReceiveMessage(message) {
  if (message.serverState) {
    this.serverState = message.serverState;
  }

  switch (message.kind) {
    case "connection-acknowledged":
      this.state.status = "acknowledged";
      break;
    /*case "instruction":
      this.state.instructionHistory.push(message);
      this.state.lastInstruction = message;
      break;*/
    default: { };
  }
}
There are two things to pay close attention to here - firstly that we update the property
this.serverState whenever an incoming message has a property called
serverState on it. Clients use this to keep a local copy of whatever the
host says its state is, and we'll use this to bind to our UI later. Secondly, there is a switch on
message.kind - the type of message we're receiving. In this case, we only actually care about the
connection-acknowledged message, and updating the
this.state.status property to
acknowledged once we receive one.
There are a few commented lines in this code that we'll discuss later on.
P2PServer
The
P2PServer class hardly differs from the client. It contains a constructor that creates an empty
this.state object:
export class P2PServer {
  constructor(identity, uniqueId, ably) {
    this.identity = identity;
    this.uniqueId = uniqueId;
    this.ably = ably;

    // this.stateMachine = DepictIt({ channel: ably });

    this.state = {
      players: [],
      hostIdentity: this.identity,
      started: false
    };
  }
It also contains a connect function that connects to Ably via the
PubSubClient:
async connect() {
  await this.ably.connect(this.identity, this.uniqueId);
}
And finally, it contains an
onReceiveMessage callback function that responds to the
connected message:
onReceiveMessage(message) {
  switch (message.kind) {
    case "connected":
      this.onClientConnected(message);
      break;
    default: {
      // this.stateMachine.handleInput(message);
    };
  }
}
All of the work is done in
onClientConnected:
onClientConnected(message) {
  this.state.players.push(message.metadata);
  this.ably.sendMessage({ kind: "connection-acknowledged", serverState: this.state }, message.metadata.clientId);
  this.ably.sendMessage({ kind: "game-state", serverState: this.state });
}
When a client connects, we keep track of their
metadata and then send two messages. The first message is a
connection-acknowledged message - that is sent specifically to the
clientId that just connected. The second is a
game-state message, with a copy of the latest
this.state object, that will in turn trigger all the clients to update their internal state.
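To make the two messages visible, here is the same onClientConnected logic run against a stubbed PubSubClient. The stub and the stand-in server object are illustrative sketches, not part of the repo:

```javascript
// Records messages instead of publishing them (illustrative stub).
const sent = [];
const stubAbly = {
  sendMessage: (msg, targetClientId) => sent.push({ msg, targetClientId })
};

// A minimal stand-in for P2PServer with the same onClientConnected logic.
const server = {
  ably: stubAbly,
  state: { players: [], hostIdentity: { clientId: "host" }, started: false },
  onClientConnected(message) {
    this.state.players.push(message.metadata);
    this.ably.sendMessage(
      { kind: "connection-acknowledged", serverState: this.state },
      message.metadata.clientId
    );
    this.ably.sendMessage({ kind: "game-state", serverState: this.state });
  }
};

server.onClientConnected({ metadata: { clientId: "client-1", friendlyName: "Jo" } });

console.log(sent[0].msg.kind, sent[0].targetClientId); // connection-acknowledged client-1
console.log(sent[1].msg.kind, sent[1].targetClientId); // game-state undefined
```

The first message is targeted at the newly connected client; the second is a broadcast, so no target is supplied.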
There's a little more that happens in the server class (in the currently commented
stateMachine line), but let's talk about how our game logic works first. We'll revisit expanded versions of
P2PClient and
P2PServer later in this article.
Designing the game
The game plays out over messages between the
host and all of the
players.
We send messages from the
host to each individual client representing the next thing they have to do. The
game stacks (the piles of Depict-It cards), are stored in memory in the host's browser, with only the information required to display to each respective player sent in messages at any one time. This keeps our message payloads small and means we can structure the application in pairs of messages - requests for user input and their responses.
The game has five key phases:
- Dealing and setup
- Collecting image input from players (repeats until game end)
- Collecting text captions from players (repeats until game end)
- Collecting scores from players
- Displaying scores
Each of these phases is driven by pairs of messages.
We store a variable called
lastMessage inside the
P2P client. This allows us to make the UI respond to the contents of this last message. This is a simple way to control what is shown on each player's screen.
We'll use a message type called
wait to place players in a holding page while other players complete their inputs.
Here are the messages used in each phase of the game:
Each of these messages is sent through the
PubSubClient class, adds some identifying information (the id of the player that sent each message) into the message body for us to filter by in the code.
As our game runs, and sends these messages to each individual client, it can collect their responses and move the
game state forwards.
Luckily, there isn't very much logic in the game, it only has to:
- Ensure that when a player sends a response to a request, it is placed on the correct
game stack of items.
- Keep track of scores when players vote on items.
- Keep track of which stack each player is currently holding.
We need to write some code for each of the game phases to send these
p2p messages at the right time, and then, build a web UI that responds to the last message received to add a gameplay experience.
We're going to use a software pattern called a
State Machine - a way to model a system that can exist in one of several known states, to run the game logic.
The GameStateMachine
Next we'll write code to capture the logic of the game. We're going to break the phases of the game up into different
Handlers - that represent both the logic of that portion of the game, and the logic that handles user input during that specific game phase.
Our implementation is part state machine, part command pattern handler.
Let's take a look at what state machine code can look like - here's a two-step game definition, taken from one of our unit tests:
const twoStepGame = () => ({
  steps: {
    "StartHandler": {
      execute: async function (state) {
        state.executeCalled = true;
        return { transitionTo: "EndHandler" };
      }
    },
    "EndHandler": {
      execute: async function (state) { }
    }
  }
});
This game definition doesn't do anything on its own - it's a collection of
steps. This example shows a start handler that just flags that execute has been called, and then
transitionTos the
EndHandler.
Defining a game
A game definition looks like this:
const gameDef = () => ({
  steps: {
    "StartHandler": { ... },
    "EndHandler": { ... }
  },
  context: {
    some: "object"
  }
});
- Steps must be named.
- Steps must contain
StartHandlerand
EndHandler.
- Properties assigned to the
stateobject during
handleInputcan be read in the
executefunction.
contextcan be provided, and can contain anything you like to make your game work.
Defining a handler
Here's one of the handlers from the previous example:
{
  execute: async function (state, context) {
    await waitUntil(() => state.gotInput == true, 5_000);
    return { transitionTo: "EndHandler" };
  },
  handleInput: async function(state, context, input) {
    state.gotInput = true;
  }
}
This is an exhaustive example, with both an
execute and a
handleInput function, though only
execute is required.
- Handlers must contain an execute function.
- Handlers can contain a handleInput function.
- Handlers can call waitUntil(() => some-condition-here); to pause execution while waiting for input.
- handleInput can be called multiple times.
- waitUntil can be given a timeout in milliseconds.
- context will be passed to the execute and handleInput functions every time they are called by the GameStateMachine.
- Handlers must return a transitionTo response from their execute function, that refers to the next Handler.
- Handlers must be async functions.
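The waitUntil helper itself isn't shown in this article. A minimal polling implementation along the lines described above might look like this; it is a sketch of the behaviour, not the repo's actual code:

```javascript
// A minimal polling waitUntil: resolves when the predicate returns true,
// rejects if timeoutMs elapses first. A sketch, not the repo's implementation.
function waitUntil(predicate, timeoutMs = 30_000, pollEveryMs = 100) {
  return new Promise((resolve, reject) => {
    const startedAt = Date.now();
    const timer = setInterval(() => {
      if (predicate()) {
        clearInterval(timer);
        resolve();
      } else if (Date.now() - startedAt >= timeoutMs) {
        clearInterval(timer);
        reject(new Error("waitUntil timed out"));
      }
    }, pollEveryMs);
  });
}

// Usage: a handler awaiting input that arrives shortly afterwards.
const state = { gotInput: false };
setTimeout(() => { state.gotInput = true; }, 250);

waitUntil(() => state.gotInput === true, 5_000)
  .then(() => console.log("input received"))
  .catch(() => console.log("timed out"));
```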
How the GameStateMachine works
The
GameStateMachine takes a
Game Definition - comprised of
steps and an optional
context object, and manages which steps are executed and when. It always expects a game to have a
StartHandler and an
EndHandler - as it uses those strings to know which game steps to start and end on.
Create a new instance of a game by doing something like this:
const game = new GameStateMachine({
  steps: {
    "StartHandler": { ... },
    "EndHandler": { ... }
  },
  context: {
    some: "object"
  }
});
Then, when you have a
game object, you can call
game.run(); to start processing the game logic at the
StartHandler.
What is the GameStateMachine doing?
The constructor for the
GameStateMachine takes the
steps and the
context and saves them inside itself.
Once that's done, the
run function does all the hard work.
async run() {
  console.log("Invoking run()", this.currentStepKey);
  this.trackMilliseconds();

  const currentStep = this.currentStep();
  const response = await currentStep.execute(this.state, this.context);

  if (this.currentStepKey == "EndHandler" && (response == null || response.complete)) {
    return; // State machine exit signal
  }

  if (response == null) {
    throw "You must return a response from your execute functions so we know where to redirect to.";
  }

  this.currentStepKey = response.transitionTo;
  this.run();
}
The state machine:
- Keeps track of the currentStepKey - this is the string that you use to define your steps in the game definition.
- Keeps track of time.
- Awaits the execute function of the StartHandler.
- Evaluates the response.
Once a response from the current handler has been received:
- If the currentStepKey is EndHandler then return - the game has concluded.
- Otherwise, update the currentStepKey to be the target of the transitionTo response - changing the current active state of the game.
- Call run again, to process the step we've just arrived at.
This flow of moving between game steps based on the outcome of the current step allows us to define all kinds of games!
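Putting the pieces together, here is a condensed, runnable version of the machine driving the two-step game from earlier. It omits time tracking and input handling, so it is a sketch rather than the real class:

```javascript
// A condensed GameStateMachine, just enough to run the two-step game
// definition shown earlier (sketch; the real class also tracks time
// and handles input).
class MiniStateMachine {
  constructor({ steps, context }) {
    this.steps = steps;
    this.context = context;
    this.state = {};
    this.currentStepKey = "StartHandler";
  }

  async run() {
    const currentStep = this.steps[this.currentStepKey];
    const response = await currentStep.execute(this.state, this.context);

    if (this.currentStepKey === "EndHandler" && (response == null || response.complete)) {
      return; // State machine exit signal
    }

    this.currentStepKey = response.transitionTo;
    await this.run();
  }
}

const machine = new MiniStateMachine({
  steps: {
    "StartHandler": {
      execute: async (state) => {
        state.executeCalled = true;
        return { transitionTo: "EndHandler" };
      }
    },
    "EndHandler": { execute: async (state) => { } }
  },
  context: {}
});

machine.run().then(() => {
  console.log(machine.state.executeCalled); // true
  console.log(machine.currentStepKey);      // "EndHandler"
});
```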
The state machine contains a
handleInput function:
async handleInput(input) {
  const currentStep = this.currentStep();
  if (currentStep.handleInput) {
    currentStep.handleInput(this.state, this.context, input);
  } else {
    console.log("Input received while no handler was available.");
  }
}
We pass user input to this function and it will find the currently active step, and forward the input onto the relevant
handleInput function defined in it. This means that if any of our steps require user input, the input will be passed through this function.
We can connect this up to our Web UI and Ably connection later.
The GameStateMachine and our game
Inside /app/js/game/ there are a series of files. The ones with
DepictIt in the filename contain the game logic.
DepictIt.js DepictIt.cards.js DepictIt.handlers.js DepictIt.types.js GameStateMachine.js
DepictIt.js is the entrypoint, and references all of the game handlers, returning the
Game Definition needed to create a game:
export const DepictIt = (handlerContext) => new GameStateMachine({
  steps: {
    "StartHandler": new StartHandler(),
    "DealHandler": new DealHandler(),
    "GetUserDrawingHandler": new GetUserDrawingHandler(180_000),
    "GetUserCaptionHandler": new GetUserCaptionHandler(60_000),
    "PassStacksAroundHandler": new PassStacksAroundHandler(),
    "GetUserScoresHandler": new GetUserScoresHandler(),
    "EndHandler": new EndHandler()
  },
  context: handlerContext
});
DepictIt is a function because we're going to pass in an Ably connection inside the
handlerContext parameter, but it returns a fully created
GameStateMachine instance to run in the Vue.js app. The game is defined as a series of handlers in the sample above. Each of these game handlers are imported from the DepictIt.handlers.js file.
Each
Handler has access to an
ably client supplied as a property called
channel in a
context object. The game works by having the hosting player's browser keep track of where all the
game hands are, sending players p2p messages to make the client code in their browsers prompt the players for input.
Each of these messages looks similar:
context.channel.sendMessage({
  kind: "instruction",
  type: "drawing-request",
  value: lastItem.value,
  timeout: this.waitForUsersFor
}, player.clientId);
They each contain a property called
kind with a value of
instruction, which allows the clients to process these messages differently to the standard
connection messages. They also each have a
type - which varies depending on which phase of the game is currently being played.
Handlers control which message
types the players send. Additionally, messages will always contain a
value.
This
value, when in the drawing phase of the game, is going to be the
prompt the player is using to draw from. If we're in the
captioning phase of the game, it'll contain the URL of the image they need to caption so our player's browser can render it in the UI.
Messages can also feature an optional
timeout value (some of the steps have a limit on the length of time they'll wait for users to reply with a drawing or caption), so including this
timeout in the
instruction means we can render a timer bar on the client side.
Let's now dive into a few of our steps and take a look at what they do.
StartHandler
On
execute:
- Creates prompt deck imported from DepictIt.cards.js.
- Shuffles deck.
- Transitions to
DealHandler.
On
handleInput:
- There is no user input.
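The article doesn't show the shuffle itself; a Fisher-Yates shuffle is the standard unbiased way to do it, so a sketch might look like this (the prompt strings here are made up):

```javascript
// Fisher-Yates shuffle: an in-place, unbiased shuffle. The repo's own
// shuffle isn't shown in this article, so this is an illustrative sketch.
function shuffle(deck) {
  for (let i = deck.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [deck[i], deck[j]] = [deck[j], deck[i]];
  }
  return deck;
}

const prompts = ["A cat astronaut", "A haunted sandwich", "A dancing robot"];
const shuffled = shuffle([...prompts]);
console.log(shuffled.length); // 3: same cards, possibly reordered
```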
DealHandler
On
execute:
- Creates a Game Stack for every player in state.players.
- Adds a prompt to the top of the Game Stack.
- Transitions to GetUserDrawingHandler.
On
handleInput:
- There is no user input.
GetUserDrawingHandler
On
execute:
- Sends a drawing-request for every player in state.players.
- Request contains the prompt from the top of that player's Game Stack.
- Waits for players to respond, or for 180 seconds to elapse.
- Adds placeholder images to the Game Stack if players do not respond.
- Transitions to PassStacksAroundHandler.
On
handleInput:
- Handler expects a url property in the player response message.
- url points to an image stored somewhere publicly accessible. (We're going to use Azure storage buckets for this later on.)
- When player input is received, an instruction is sent to the player, prompting them to wait.
GetUserCaptionHandler
On execute:
- Sends a caption-request for every player in state.players.
- Request contains the url from the top of that player's Game Stack.
- Waits for players to respond, or for 60 seconds to elapse.
- Adds "Answer not submitted" to the Game Stack if players do not respond.
- Transitions to PassStacksAroundHandler.
On handleInput:
- Handler expects a caption property in the player response message.
- When player input is received, an instruction is sent to the player, prompting them to wait.
PassStacksAroundHandler

On execute:

- Moves the Game Stacks forward to the next player that is required to contribute.
- If the Game Stacks have been moved back to their original owner, transitions to GetUserScoresHandler.
- Otherwise, picks either GetUserDrawingHandler or GetUserCaptionHandler:
  - Picks GetUserDrawingHandler when the top item in the Game Stack is a Caption.
  - Picks GetUserCaptionHandler when the top item in the Game Stack is a Drawing.
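The rotation logic above can be sketched as a simple shift of stack ownership around the circle of players. This is an illustrative reconstruction; the helper and field names are assumptions, not the game's actual code:

```javascript
// Illustrative sketch of passing stacks one seat along.
// `stacks` is assumed to be [{ heldBy, originalOwner, items }, ...] and
// `players` an ordered list of clientIds - both names are assumptions.
function passStacksAround(stacks, players) {
  for (const stack of stacks) {
    const index = players.indexOf(stack.heldBy);
    stack.heldBy = players[(index + 1) % players.length]; // next player in the circle
  }
  // A full round of passing is complete when every stack is home again.
  return stacks.every(s => s.heldBy === s.originalOwner);
}

const players = ["a", "b", "c"];
const stacks = players.map(p => ({ heldBy: p, originalOwner: p, items: [] }));

passStacksAround(stacks, players); // a's stack is now held by b, and so on
console.log(stacks[0].heldBy); // "b"
```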
GetUserScoresHandler

On execute:

- Sends a pick-one-request for each Game Stack.
- Waits for all players to submit a score for that specific Game Stack.
- Sends the next pick-one-request until all Game Stacks have been scored.

On handleInput:

- Assigns a vote to the author of each picked Game Stack Item.
- Handles admin input to progress the game forward and skip the user scoring, to prevent games hanging.
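Vote assignment in this phase boils down to a tally keyed by item author. Here is a minimal sketch, assuming each vote carries the id of the picked Stack Item; the shapes below are our assumptions rather than the game's published schema:

```javascript
// Minimal vote tally, assuming each stack item is { id, author, value }.
function tallyVotes(stackItems, pickedIds) {
  const scores = {};
  for (const id of pickedIds) {
    const item = stackItems.find(i => i.id === id);
    if (!item) continue; // ignore votes for unknown items
    scores[item.author] = (scores[item.author] || 0) + 1;
  }
  return scores;
}

const items = [
  { id: 1, author: "alice", value: "a drawing" },
  { id: 2, author: "bob", value: "a caption" }
];

const scores = tallyVotes(items, [1, 1, 2]);
console.log(scores); // { alice: 2, bob: 1 }
```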
EndHandler

On execute:

- Sends a show-scores message with the final scores of the game round.
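All of the handlers above share the same two-method shape, so a skeleton for a new game step might look like this. It is a sketch of the pattern described here, not code copied from the repository:

```javascript
// Skeleton of a game-step handler in the style described above.
// `state` is the shared game state; `context` carries the Ably channel.
class ExampleHandler {
  async execute(state, context) {
    // Do this step's work, optionally messaging players via context.channel,
    // then tell the state machine where to go next.
    return { transitionTo: "NextHandlerName" };
  }

  async handleInput(state, context, message) {
    // Called for each player message that arrives while this step is active.
  }
}

const handler = new ExampleHandler();
handler.execute({}, { channel: null }).then(r => console.log(r.transitionTo)); // "NextHandlerName"
```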
Handlers and async / await
The interesting thing about these handlers is that we're using async/await and an unresolved Promise to pause execution while we wait for user input. This allows us to represent the game's control flow linearly while waiting for messages to arrive over the p2p channel.

GetUserDrawingHandler is an example of this linear flow. First we set up an execute method, creating an instance variable called submitted (scoped to this). We know that when the number of submitted drawings is equal to the total number of players, every player has sent an image.

async execute(state, context) {
    this.submitted = 0;
    ...

Next, we send an instruction to each player, in this case a drawing-request:

for (let player of state.players) {
    ...
    context.channel.sendMessage({ kind: "instruction", type: "drawing-request", ... });
}
Then we begin waiting for responses.
We use the syntax await waitUntil(() => some condition) to do this.

const result = { transitionTo: "PassStacksAroundHandler" };
try {
    await waitUntil(() => this.submitted == state.players.length, this.waitForUsersFor);
} catch (exception) {
    result.error = true;
    ... /* error handling */
}
return result;
}

This creates an unresolved Promise that polls in the background, executing the function passed to it. When that function returns true, execution continues and the Promise resolves.

While the code is paused here, awaiting the unresolved promise, messages sent via the Ably channel are passed to the handleInput function of this specific handler.

async handleInput(state, context, message) {
    if (message.kind == "drawing-response") {
        const stackItem = new StackItem("image", message.imageUrl);
        const stack = state.stacks.filter(s => s.heldBy == message.metadata.clientId)[0];
        stack.add({ ...stackItem, author: message.metadata.clientId, id: createId() });
        context.channel.sendMessage({ kind: "instruction", type: "wait" }, message.metadata.clientId);
        this.submitted++;
    }
}
The input handler increments this.submitted each time it receives a message from Ably. Each time the waitUntil condition runs, it checks the current value of this.submitted. Eventually, enough messages will be received for the promise to resolve.

The waitUntil call also takes a timeout value; in this example it's the instance variable this.waitForUsersFor, which is provided in the constructor. If the callback condition hasn't been met by the time the timer reaches this timeout value, the Promise will be rejected and an exception will be thrown. This means we can handle things like a player taking too long to draw a picture by submitting a default image.
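The waitUntil helper itself isn't shown in this excerpt. A polling implementation matching the behaviour described (resolve when the condition returns true, reject after a timeout) could look like this; the polling interval is an arbitrary choice of ours:

```javascript
// A possible waitUntil: polls `condition` until it returns true,
// rejecting if `maxWaitMs` elapses first.
function waitUntil(condition, maxWaitMs, pollEveryMs = 100) {
  return new Promise((resolve, reject) => {
    const startedAt = Date.now();
    const timer = setInterval(() => {
      if (condition()) {
        clearInterval(timer);
        resolve();
      } else if (Date.now() - startedAt > maxWaitMs) {
        clearInterval(timer);
        reject(new Error("Timed out waiting for condition"));
      }
    }, pollEveryMs);
  });
}

// Usage: resolves once `submitted` reaches 2.
let submitted = 0;
setTimeout(() => { submitted = 2; }, 50);
waitUntil(() => submitted === 2, 1000, 10).then(() => console.log("all players responded"));
```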
The game UI
We'll now go over the basics of the Vue app, the P2PClient, and how the GameStateMachine orchestrates the gameplay.

The Vue app is split out into Vue Components. Each component responds to a specific Game State Instruction message. The Game State Machine forwards messages received from Ably on to the Vue app, so that the Game Handlers can respond and update the UI accordingly. We'll use an HTML canvas to present the players with a way of drawing on the screen with a mouse (or fingers/pointer on touch screen devices) and capturing their input.
Building the UI with Vue
The UI markup is deceptively simple at the top level, because we use Vue Components for all of the game phases.

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8" />
    <title>Depict-it</title>
    <meta ...>
    <script src="//cdn.ably.io/lib/ably-1.js" defer></script>
    <script src="//cdn.jsdelivr.net/npm/vue/dist/vue.js"></script>
    <script src="/index.js" type="module"></script>
    <link href="" rel="stylesheet" />
    <link href="/style.css" rel="stylesheet" />
</head>
In the HTML head we reference the Ably JavaScript SDK, along with the Vue.js library.
We also reference the Index.js file as a module; this means we can use native browser import and export module syntax. Finally, we reference a Google Font and style.css, which contains all the styles for the UI.

The game UI is defined within the HTML main element:
<div v- <create-game-form v-on:</create-game-form> </div>
First, we show the form that players will use to create a new game, the CreateGameForm. This is imported from CreateGameForm.js. Using Vue's conditional rendering, we can make this element show only when a player hasn't yet hosted or joined a game. Once a game has been joined or hosted, we then show the activeGame portion of the app, which contains the components that trigger when the game is running. This is split into several parts. The first part is called the game-lobby:
<div v-else <div class="game-lobby" v- <invite-link :</invite-link> <connected-players-summary :</connected-players-summary> <ready-or-waiting-prompt : </ready-or-waiting-prompt> </div> <div v- <loader></loader> </div>
The game-lobby references components to render invite links, connected players and prompt cards.

There is also a timer-bar component that binds to any timeouts sent from the host:
<timer-bar </timer-bar>
Finally, we have the markup components for the different game state instructions sent from the host. You'll spot familiar names here, as they line up with the handlers that were defined earlier: one component per game phase.
<div v- <playfield-wait-for-others :</playfield-wait-for-others> <playfield-drawing :</playfield-drawing> <playfield-caption :</playfield-caption> <playfield-pick-one :</playfield-pick-one> <playfield-show-scores :</playfield-show-scores> </div> </div> </main> </body> </html>
Throughout the markup we're using the Vue.js syntax :state= and :is-host. These attributes are Vue bindings that pass values from our main Vue app down to the Vue components so that the components can use them. Likewise, the v-on: event handlers bind functions in the Vue app to events that the components can raise.

We touched on the layout of our index.js file briefly at the start of this README, but let's take a look at it here in full.

export var app = new Vue({
    el: '#app',
    data: {
        p2pClient: null,
        p2pServer: null,
        gameId: null,
        friendlyName: null,
    },
First we define some data properties. Vue makes these properties observable so we can bind them to the UI (whenever anything changes in these properties, the UI will update).

Next we define some computed properties to make the HTML binding code more succinct:

computed: {
    state: function () { return this.p2pClient?.state; },
    transmittedServerState: function () { return this.p2pClient?.serverState; },
    joinedOrHosting: function () { return this.p2pClient != null || this.p2pServer != null; },
    isHost: function () { return this.p2pServer != null; },
    hasMessage: function () { return this.message != null; },
    gameCanBeStarted: function () { return this.transmittedServerState && !this.transmittedServerState.started },
    depictItClient: function () { return this.p2pClient?.depictIt; }
},
These computed properties are used in our markup above. Where depictItClient is used in the HTML, it is this computed property that is being referenced. Anything you bind to needs to be in either your Vue data or your Vue computed properties.

Finally we define our host and join methods to bind to clicks, along with startGame and nextRound to bind to events emitted by our Vue components.
methods: {
    host: async function (evt) {
        // ... (identity and pubSubClient creation elided in this excerpt)
        this.p2pServer = new P2PServer(identity, this.gameId, pubSubClient);
        this.p2pClient = new P2PClient(identity, this.gameId, pubSubClient);
        await this.p2pServer.connect();
        await this.p2pClient.connect();
    },
    join: async function (evt) {
        // ... (identity and pubSubClient creation elided in this excerpt)
        this.p2pClient = new P2PClient(identity, this.gameId, pubSubClient);
        await this.p2pClient.connect();
    },
    startGame: async function (evt) {
        this.p2pServer?.startGame();
    },
    nextRound: async function (evt) {
        this.p2pServer?.nextRound();
    }
}
});
This is the entire outline of the top level of the app; most of the display logic is hidden in the Vue components.

Remember, when a user joins or hosts a game, a P2PClient or P2PServer instance is created, and the state managed inside of them becomes observable, so we can bind any properties on these objects into the app.

At the bottom of Index.js there is also a handleMessagefromAbly function that passes messages received over the P2P channel on to the P2PServer and P2PClient instances. Let's take a quick look inside those classes again to see how this all works.
Inside P2PServer
P2PServer is really where most of the game is managed.
When the host creates a game, a new instance of P2PServer is created. In turn, this creates an instance of the GameStateMachine with an empty this.state object.

import { DepictIt } from "../game/DepictIt.js";

export class P2PServer {
    constructor(identity, uniqueId, ably) {
        this.identity = identity;
        this.uniqueId = uniqueId;
        this.ably = ably;
        this.stateMachine = DepictIt({ channel: ably });
        this.state = { players: [], hostIdentity: this.identity, started: false };
    }
Notice that when we call the imported DepictIt function, we're passing a context object with the property channel set to the ably parameter. By providing this channel to the GameStateMachine instance, we're making sure that every time one of our Handlers executes, it has access to the Ably channel and can use it to send p2p messages to all the other clients.

Next, we're going to define connect and startGame:

async connect() {
    await this.ably.connect(this.identity, this.uniqueId);
}

async startGame() {
    this.state.started = true;
    this.ably.sendMessage({ kind: "game-start", serverState: this.state });
    this.stateMachine.state.players = this.state.players;
    this.stateMachine.run();
}
These two functions are bound into our Vue components. startGame assigns the currently connected players to the stateMachine.state property and triggers stateMachine.run(), which starts the game of Depict-It.

nextRound is used to progress the game: it calls resetCurrentStepKeepingState on our stateMachine before invoking run again, moving the current handler back to the start without clearing player scores.

async nextRound() {
    this.stateMachine.resetCurrentStepKeepingState();
    this.stateMachine.run();
}
And finally, let's take a look at the vitally important onReceiveMessage handler:

onReceiveMessage(message) {
    switch (message.kind) {
        case "connected": this.onClientConnected(message); break;
        default: { this.stateMachine.handleInput(message); };
    }
}

onClientConnected(message) {
    this.state.players.push(message.metadata);
    this.ably.sendMessage({ kind: "connection-acknowledged", serverState: this.state }, message.metadata.clientId);
    this.ably.sendMessage({ kind: "game-state", serverState: this.state });
}
}
You can see that in this version of the handler we treat clients connecting as a special case, by replying with connection-acknowledged. We use this to update our connected status in the debug UI element.

Most importantly, however, any other messages are passed to the this.stateMachine.handleInput() function.

What we're doing here is delegating responsibility for processing our messages to whichever handler is currently active, routed via the GameStateMachine instance. This is the glue that takes a message received by our Ably connection and passes it through our GameStateMachine to the currently active Handler.
Inside P2PClient
The P2PClient that gets created when anyone joins a game follows the same general pattern as the P2PServer.

First we have a constructor that creates some state, and a few properties that our game is going to use: depictIt is going to store a client wrapper that we'll later bind into some Vue components, and the this.state object contains both an instructionHistory and a lastInstruction for us to track all the messages this client has received from the host.

import { DepictItClient } from "../game/DepictIt.js";

export class P2PClient {
    constructor(identity, uniqueId, ably) {
        this.identity = identity;
        this.uniqueId = uniqueId;
        this.ably = ably;
        this.depictIt = null;
        this.serverState = null;
        this.state = { status: "disconnected", instructionHistory: [], lastInstruction: null };
    }
Next, much like in P2PServer, we define a connect function that sends a message to the host and waits for acknowledgement. We're also creating an instance of DepictItClient, a class that offers a function for each response the client has to send back to the host. We bind this to the Vue components so they can reply to the host when they have player input.

async connect() {
    await this.ably.connect(this.identity, this.uniqueId);
    this.ably.sendMessage({ kind: "connected" });
    this.state.status = "awaiting-acknowledgement";
    this.depictIt = new DepictItClient(this.uniqueId, this.ably);
}
And finally we have the onReceiveMessage function.

The most important piece here is that when the P2PClient receives a message with a kind of instruction, it'll store a copy of it in the instructionHistory and assign the most recent message to the property this.lastInstruction.

onReceiveMessage(message) {
    if (message.serverState) {
        this.serverState = message.serverState;
    }
    switch (message.kind) {
        case "connection-acknowledged": this.state.status = "acknowledged"; break;
        case "instruction":
            this.state.instructionHistory.push(message);
            this.state.lastInstruction = message;
            break;
        default: { };
    }
}
}
Practically all of the UI is going to be bound up to the values in the lastInstruction property. It is the most important piece of data in the entire application.
Splitting our game phases into Vue components
Vue components let us split out parts of the functionality into what looks like separate Vue apps. They follow practically the same syntax, but contain both the UI template and the JavaScript.
The Depict-It app is split into a bunch of smaller components:
base-components/CopyableTextBox.js
base-components/DrawableCanvas.js
ConnectedPlayersSummary.js
CreateGameForm.js
InviteLink.js
Loader.js
PlayfieldCaption.js
PlayfieldDrawing.js
PlayfieldPickOne.js
PlayfieldShowScores.js
PlayfieldWaitForOthers.js
ReadyOrWaitingPrompt.js
StackItem.js
TimerBar.js
Keeping to a sensible convention, the component name has the phase of the game it's associated with in the filename.
Let's look inside StackItem as an example, since it is a simple component:

export const StackItem = {
    props: ['item'],
    methods: {
        emitIdOfClickedElement: async function () {
            this.$emit('click', this.item.id);
        }
    },
    template: `
    <span v-{{ item.value }}</span>
    <img v-else v-bind:
    `
};
Vue Components:

- Can have named props that you can bind in HTML using the :prop-name="something" syntax.
- Can have methods.
- Can have computed properties.
- Have a template string.
In the example of the StackItem, there is a v-if and v-else statement displaying a span if the item is a string (a caption), or an img tag when the item is a drawing.
All of the components follow a similar pattern - capturing bits of interaction.
The other piece of syntax you can see here is the this.$emit function call.

This allows us to define custom events that can be bound in the consuming Vue component or Vue app: if we emit an event, the parent can use the v-on syntax to listen and respond to it. In this case, we're creating an event called click, and passing the item.id of the selected Stack Item to subscribers of that event.

Let's now take a look at the PlayfieldDrawing component to see how we handle sending data back to the host.

export const PlayfieldDrawing = {
    props: ['state', 'client'],
    methods: {
        sendImage: async function (base64EncodedImage) {
            await this.client.sendImage(base64EncodedImage);
        }
    },
    template: `
    <section v-
    <div class="drawing-prompt">
        <div class="prompt-front">Draw This</div>
        <div class="prompt-back">
            {{ state.lastInstruction.value }}
        </div>
    </div>
    <drawable-canvas v-on:</drawable-canvas>
    </section>
    `
};
This Vue component is typical of the others that require interactivity. Remember when we walked through P2PClient and we created an instance of DepictItClient? We've bound that client into the Vue component as the property client. What this means is that when our sendImage function is triggered by the DrawableCanvas raising the drawing-finished event, we can use that client to send an image back to the host.
This general pattern holds for collecting captions and scoring our game.
The DepictIt Client
Our DepictIt Client is a small wrapper class around all of our components' interactions with the host.
This client is responsible for sending data and little else.
All those messages that are expected? They're all defined here.
export class DepictItClient {
    constructor(gameId, channel) {
        this.gameId = gameId;
        this.channel = channel;
    }

    async sendImage(base64EncodedImage) { ... }

    async sendCaption(caption) {
        this.channel.sendMessage({ kind: "caption-response", caption: caption });
    }

    async logVote(id) {
        this.channel.sendMessage({ kind: "pick-one-response", id: id });
    }

    async hostProgressedVote() {
        this.channel.sendMessage({ kind: "skip-scoring-forwards" })
    }
}

There is one extra interesting function in here though: sendImage.

async sendImage(base64EncodedImage) {
    const result = await fetch("/api/storeImage", {
        method: "POST",
        body: JSON.stringify({ gameId: this.gameId, imageData: base64EncodedImage })
    });
    const savedUrl = await result.json();
    this.channel.sendMessage({ kind: "drawing-response", imageUrl: savedUrl.url });
}

sendImage has to POST the base64EncodedImage that's created by our DrawableCanvas component to an API running on our instance of Azure Functions before it sends a message back to the host.
Storing images into Azure Blob Storage via an Azure Function
To make our images work, we've added an extra function to the directory /api/storeImage/index.js:
const { StorageSharedKeyCredential } = require("@azure/storage-blob");
const { BlobServiceClient } = require("@azure/storage-blob");
const { v4: uuidv4 } = require("uuid"); // needed for the unique filename below

module.exports = async function (context, req) {
    const defaultAzureCredential = new StorageSharedKeyCredential(process.env.AZURE_ACCOUNT, process.env.AZURE_KEY);
    const blobServiceClient = new BlobServiceClient(process.env.AZURE_BLOBSTORAGE, defaultAzureCredential);
    const containerClient = blobServiceClient.getContainerClient(process.env.AZURE_CONTAINERNAME);

    const unique = `game_${req.body.gameId}_${uuidv4()}.png`;
    const url = `${process.env.AZURE_BLOBSTORAGE}/${process.env.AZURE_CONTAINERNAME}/${unique}`;

    const fileData = req.body.imageData.replace(/^data:image\/\w+;base64,/, "");
    const buffer = Buffer.from(fileData, 'base64'); // new Buffer(...) is deprecated
    const blockBlobClient = containerClient.getBlockBlobClient(unique);
    const uploadBlobResponse = await blockBlobClient.upload(buffer, buffer.length || 0);

    context.res = {
        headers: { "content-type": "application/json" },
        body: { url: url }
    };
};
You may recognize this as boilerplate: it's the standard Azure Blob Storage SDK code to upload a file to a storage bucket. This Azure function is mounted by the Azure Functions runtime at the path /api/storeImage, so we can call it using our browser's Fetch API.

The function returns an absolute URL of the stored image, which lives in a bucket that supports unauthenticated reads.

The bucket is also configured to auto-delete items after 24 hours to keep our storage costs really low.

This is a super quick way to add a little bit of statefulness to our app, especially because the average size of our images is over the message size cap for Ably messages.
Drawing using HTML5 Canvas
We have a Vue component that we use to handle drawing with a mouse, or "finger painting" on touch devices.

export const DrawableCanvas = {
    ...
    mounted: function () {
        const element = document.getElementById(this.canvasId);
        if (element && !this.canvas) {
            this.canvas = new DrawableCanvasElement(this.canvasId).registerPaletteElements(this.paletteId);
        }
    },
    ...
    template: `
    <div class="drawable-canvas">
        <div class="canvas-and-paints">
            <canvas v-bind:</canvas>
            <div v-bind:
                <div style="background-color: black;" v-on:</div>
                <div style="background-color: red;" v-on:</div>
                <div style="background-color: green;" v-on:</div>
                <div style="background-color: blue;" v-on:</div>
                <div style="background-color: white;" v-on:</div>
            </div>
        </div>
        <button v-on:I'm finished!</button>
    </div>`
};
There are two important things about this component. Firstly, we use the class DrawableCanvasElement from the npm package @snakemode/snake-canvas (this package was built while writing this game).

Secondly, we emit an event called drawing-finished when the user clicks the "I'm finished!" button in the template. This event is listened to in the consuming Vue component, in this case the PlayfieldDrawing component that deals with the drawing phase of the game. As the event payload, we pass the result of the function call canvas.toString(); this is a thin wrapper around the native browser call that converts an HTML Canvas element to a base64-encoded PNG. The consuming component then uses this to upload images to our Azure Blob Storage account.
How does the drawing canvas work?
There's not too much to the drawing canvas: it takes an element Id (which it presumes belongs to an HTML Canvas element) and adds event handlers for mouse up/down/move. Whenever the mouse is moved, a line between the last position and the current one is drawn, and a 1px blur is applied to smooth out the aliasing in the image.

You might notice that we're also calling the function registerPaletteElements. This adds a click handler to each child element of the passed-in Id (the palette elements). When one is clicked, the active colour is set to the background colour of the clicked palette element.

This means we can add and remove colours in our drawable canvas at will.
Touch support
The canvas also has touch support - we have to do a little bit of maths to make sure we're using the correct x and y coordinates in our canvas to support both mouse and touch. Multi-touch isn't supported.
getLocationFrom(e) {
    const location = { x: 0, y: 0 };

    if (e.constructor.name === "TouchEvent") {
        const bounds = e.target.getBoundingClientRect();
        const touch = e.targetTouches[0];
        location.x = touch.clientX - bounds.left;
        location.y = touch.clientY - bounds.top;
    } else {
        location.x = e.offsetX;
        location.y = e.offsetY;
    }

    return location;
}
This function is used to work out exactly where the player is drawing on our canvas - using either the mouse position, or the position of the first touch event.
Recap
We've spoken at length about how the core pieces of this game hang together.
If you want a deeper understanding, the code is all here, and you can run it locally by pulling this repo and executing npm run start, once you've added API keys for Ably and Azure Blob Storage to the /api/local.settings.json file.
Running on your machine
While this whole application runs inside a browser, we need some kind of backend to keep our Ably API key safe. The running version of this app is hosted on Azure Static Web Apps (preview), which provides a serverless function that we can use to implement Ably Token Authentication.

We need to keep the Ably API key on the server side so people can't grab it and eat up your usage quota. The client-side SDK knows how to request a temporary key from an API call; we just need something to host it. In the api directory, there's code for an Azure Functions API that implements this Token Authentication behaviour.
Azure Static Web Apps automatically hosts this API for us, because there are a few .json files in the right places that it's looking for and understands. To have this same experience locally, we'll need to use the Azure Functions Core Tools.
Local dev pre-requirements
We use live-server to serve our static files, and Azure Functions for interactivity:

npm install -g live-server
npm install -g azure-functions-core-tools
To set your API key for local development:
cd api
func settings add ABLY_API_KEY Your-Ably-Api-Key
Running this command will encrypt your API key into the file /api/local.settings.json. You don't need to check it in to source control; even if you do, it won't be usable on another machine.

Next, you need to create an Azure Blob Storage account, create a container (a storage bucket), and generate an API key.
Please refer to the Azure documentation for this. Once you know all your Azure configuration, you can either edit your local.settings.json file by hand, or add to it using the func command as demonstrated above. You'll need to add the following keys:

AZURE_ACCOUNT
AZURE_CONTAINERNAME
AZURE_BLOBSTORAGE
AZURE_KEY
Here is an example of an unencrypted local.settings.json file:
{ "IsEncrypted": false, "Values": { "ABLY_API_KEY": "ably-api-key-here", "AZURE_ACCOUNT": "scrawlimages", "AZURE_CONTAINERNAME": "gameimages", "AZURE_BLOBSTORAGE": "", "AZURE_KEY": "some-azure-access-token-from-the-storage-account", "FUNCTIONS_WORKER_RUNTIME": "node" }, "ConnectionStrings": {} }
How to run for local dev
To run the Depict-It app, first install the npm modules and the modules in the api directory, then back in the root directory, run the start script:
npm install
cd api
npm install
cd ../
npm run start
I have a string like this:
mysz = "name=john age=13 year=2001";
I want to remove the whitespaces in the string. I tried trim() but this removes only whitespaces before and after the whole string. I also tried replaceAll("\\W", "") but then the = also gets removed.
How can I achieve a string with:
mysz2 = "name=johnage=13year=2001"
st.replaceAll("\\s+","") and st.replaceAll("\\s","") produce the same result.
The second regex is 20% faster than the first one, but as the number of consecutive spaces increases, the first one performs better than the second one.
Assign the result to a variable if you need it, or use it directly:

st = st.replaceAll("\\s+","")
replaceAll("\\s","")
\w = Anything that is a word character
\W = Anything that isn't a word character (including punctuation etc)
\s = Anything that is a space character (including space, tab characters etc)
\S = Anything that isn't a space character (including both letters and numbers, as well as punctuation etc)
If you need to remove non-breaking spaces too, you can extend the character class like this (note that a | inside a character class is a literal pipe, so it is omitted here):

st.replaceAll("[\\s\\u00A0]+", "");
import java.util.regex.Matcher;
import java.util.regex.Pattern;
String pattern="[\\s]";
String replace="";
part="name=john age=13 year=2001";
Pattern p=Pattern.compile(pattern);
Matcher m=p.matcher(part);
part=m.replaceAll(replace);
System.out.println(part);
SignalWire is an experimental Server<->Client plumbing project I started, that magically wires up your HTML5 front end to the collections/tables in your data store or ORM, and I just pushed the first commit to Github. SignalWire uses SignalR and Roslyn libraries to implement features like
- Exposing collections in your Back end directly over the wire for CRUD operations and for performing LINQ queries from your JavaScript.
- Enable using C# in your HTML applications.
I was inspired by Meteor and started SignalWire as a POC to implement some of these features on the .NET stack, but I've got a few more ideas on the way and thought about pushing this to Github. See the codebase in Github.
Check out this quick video (no sound, sorry), a simple Taskboard built using SignalWire. See how we are accessing the collections in Javascript, and check how we are issuing LINQ queries
As of now, support is available for EntityFramework and MongoDb as the back end. Also, it provides
- A lightweight permission framework (wip)
- A client-side JavaScript API to access your collections with minimal/no server-side code
- Model validation support using normal C#/ASP.NET data validation attributes
- LINQ support to issue LINQ queries from HTML pages (highly experimental; as of now there is no sandboxing support per caller)
- C# code in your HTML pages (wip)
How To Start
You may start with Creating an Empty ASP.NET Project in Visual Studio 2012 (You need .NET 4.5 as we are using Roslyn September CTP libraries). Then, install SignalWire using Nuget. In Package Manager console
Install-Package SignalWire
This will add the following components to your project.
- Models\TaskDb.cs - An example Entity Framework data context. You can use your own instead.
- Hubs\DataHub.cs - An example DataHub.
- Scripts\SignalWire.js - Client-side jQuery plugin for SignalR.

Now, go to Index.html and verify that all your JS file versions are correct. Run the application to see it working.
Server – Data Context and Hub
The only server-side code you need is your POCO objects and a hub inherited from the DataHub base class, which is defined in the SignalWire library. You need to use the Collection attribute to map a POCO class to a set/collection, and make sure you always specify the collection name in lower case.

//Simple POCO class to represent a task
[Collection("tasks")]
public class Task
{
    [Required]
    public int Id { get; set; }

    [Required]
    [MaxLength(100, ErrorMessage = "Subject cannot be longer than 100 characters.")]
    public string Subject { get; set; }

    [Required]
    [MaxLength(200, ErrorMessage = "Details cannot be longer than 200 characters.")]
    public string Details { get; set; }

    public bool Completed { get; set; }
}

//Simple demo db context
public class TaskDb : DbContext
{
    public DbSet<Task> Tasks { get; set; }
}

Now, you need a hub. By convention, Wire assumes the server hub's name is Data if it is not specified in the init method ($.wire.init(..)). In the example you get when you install the NuGet package, you'll see we are using an Entity Framework context provider for TaskDb, which is a DataContext. Replace TaskDb with your own EF data context if required. Once you do that, all the collections/sets within the context can be accessed over the wire. In the example, you have only one set in your TaskDb data context, as you can see above: Tasks. Let us create the hub.
public class Data : DataHub<EFContextProvider<TaskDb>>
That's all you need to access collections via the Wire.
Client - Initializing and Issuing Queries
SignalWire magically exposes all the tables/sets/collections in your data back end via the $.wire JavaScript object on the client side. You can initialize $.wire using the init() method, which returns a jQuery Deferred. Here is a quick example of initializing Wire and issuing a LINQ query.

//Initialize wire
$.wire.init().done(function () {
    //Now you can access the tasks collection in your data context.
    //You can issue a LINQ query.
    $.wire.tasks.query("from Task t in Tasks select t")
        .done(function (result) {
            $.each(result, function (index, task) {
                //Do something with each task
            });
        }).fail(function (result) {
            alert(JSON.stringify(result.Error));
        });
});
Other Wire Methods
You can use $.wire.yourcollection.add(..) to add objects to a specific collection. The add method will return the added item with updated Id up on completion.
var t = { "subject": $("#subject").val(), "details": $("#details").val(), }; //Add a task to the Tasks collection $.wire.tasks.add(t) .done(function (task) { //Note that you'll get the auto generated Id }).fail(function (result) { //result.Error contains the error if any //result.ValidationResults contains the Validation results if any alert(JSON.stringify(result.Error)); });
Similarly, you can use
- $.wire.yourcollection.remove(item) to remove items
- $.wire.yourcollection.update(item) to update an item (based on Id match).
- $.wire.yourcollection.read({"query":"expression", "skip":"xxx", "take":"xxx"}) to read from a specific collection
Permissions
You can decorate your Model classes with Custom permission attributes that implements IPermission. This will check if the user in the current context can actually perform a specific operation on a Model entity.
What's More
I'm planning the following features based on my time
- Sandboxed execution for C# scripts in HTML page that can access client side context
- Publishing and data sync
- Permission based events, may be using something like PushQA.
Fork on Github to play around | http://www.amazedsaint.com/2012/09/signalwire-magical-plumbing-with-your.html | CC-MAIN-2016-30 | refinedweb | 883 | 57.06 |
When an app that is running in the Python 2 runtime sends a request to another App Engine app, it can use the App Engine App Identity API to assert its identity. The app that receives the request can use this identity to determine if it should process the request.
The App Identity API is not available in the Python 3 runtime, so if your Python 3 apps need to assert their identity when sending requests to other App Engine apps, you can use OpenID Connect (OIDC) ID tokens that are issued and decoded by Google's OAuth 2.0 APIs.
Here's an overview of using OIDC ID tokens to assert and verify identity:
- An App Engine app named "App A" retrieves an ID token from the Google Cloud runtime environment.
- App A adds this token to a request header just before sending the request to App B, which is another App Engine app.
- App B uses Google's OAuth 2.0 APIs to verify the token payload. The decoded payload contains the verified identity of App A in the form of the email address of App A's default service account.
- App B compares the identity in the payload to a list of identities that it is allowed to respond to. If the request came from an allowed app, App B processes the request and responds.
This guide describes how to update your App Engine apps to use OpenID Connect (OIDC) ID tokens to assert identity, and update your other App Engine apps to use ID tokens to verify identity before processing a request.
Key differences between the App Identity and OIDC APIs
Apps in the Python 2 runtime don't need to explicitly assert identity. When an app uses the
httplib,
urllib, or
urllib2Python libraries or the App Engine URL Fetch service to send outbound requests, the runtime uses the App Engine URL Fetch service to make the request. If the request is being sent to the
appspot.comdomain, URL Fetch automatically asserts the identity of the requesting app by adding the
X-Appengine-Inbound-Appidheader to the request. That header contains the app's application ID (also called the project ID).
Apps in the Python 3 runtime do need to explicitly assert identity by retrieving an OIDC ID token from the Google Cloud runtime environment and adding it to the request header. You will need to update all of the code that sends requests to other App Engine apps so that the requests contain an OIDC ID token.
The
X-Appengine-Inbound-Appidheader in a request contains the project ID of the app that sent the request.
The payload of Google's OIDC ID token contains the email address of the app's default service account. The username portion of this email address is the same as the project ID. While you don't need to change the list of project IDs that an app will allow requests from, you will need to add some code to extract the username from the token payload.
The quotas for URL Fetch API calls are different from Google's OAuth 2.0 APIs quotas for granting tokens. You can see the maximum number of tokens you can grant per day in the Cloud Console OAuth consent screen. Neither URL Fetch, the App Identity API, nor Google's OAuth 2.0 APIs incur billing.
Overview of the migration process
To migrate your Python apps to use OIDC APIs to assert and verify identity:
In apps that need to assert identity when sending requests to other App Engine apps:
Wait until your app is running in a Python 3 environment to migrate to ID tokens.
While it is possible to use ID tokens in the Python 2 runtime, the steps in Python 2 are complex and are needed only temporarily until you update your app to run in the Python 3 runtime.
Once your app is running in Python 3, update the app to request an ID token and add the token to a request header.
In apps that need to verify identity before processing a request:
Start by upgrading your Python 2 apps to support both ID tokens and the App Identity API identities. This will enable your apps to verify and process requests from either Python 2 apps that use the App Identity API or Python 3 apps that use ID tokens.
Once your upgraded Python 2 apps are stable, migrate them to the Python 3 runtime. Keep supporting both ID tokens and the App Identity API identities until you are certain that the apps no longer need to support requests from legacy apps.
When you no longer need to process requests from legacy App Engine apps, remove the code that verifies App Identity API identities.
After testing your apps, deploy the app that processes requests first. Then deploy your updated Python 3 app that uses ID tokens to assert identity.
Asserting identity
Wait until your app is running in a Python 3 environment, then follow these steps to upgrade the app to assert identity with ID tokens:
Install the
google-authclient library.
Add code to request an ID token from Google's OAuth 2.0 APIs and add the token to a request header before sending a request.
-
Installing the
google-auth client library for Python 3 apps
To make the
google-auth client library available to your Python3 app,
create a
requirements.txt file in the same folder as your
app.yaml
file and add the following line:
google-auth
When you deploy your app, App Engine will download all of the
dependencies that are defined in the
requirements.txt file.
For local development, we recommend that you install dependencies in a virtual environment such as venv.
Adding code to assert identity
Search through your code and find all instances of sending requests to other App Engine apps. Update those instances to do the following before sending the request:
Add the following imports:
from google.auth.transport import requests as reqs from google.oauth2 import id_token
Use
google.oauth2.id_token.fetch_id_token(request, audience)to retrieve an ID token. Include the following parameters in the method call:
request: Pass the request object you're getting ready to send.
audience: Pass the URL of the app that you're sending the request to. This binds the token to the request and prevents the token from being used by another app.
For clarity and specificity, we recommend that you pass the
appspot.comURL that App Engine created for the specific service that is receiving the request, even if you use a custom domain for the app.
In your request object, set the following header:
'Authorization': 'ID {}'.format(token) flask import Flask, render_template, request from google.auth.transport import requests as reqs from google.oauth2 import id_token import requests app = Flask(__name__) @app.route('/', methods=['GET']) def index(): return render_template('index.html') @app.route('/', methods=['POST']) def make_request(): url = request.form['url'] token = id_token.fetch_id_token(reqs.Request(), url) resp = requests.get( url, headers={'Authorization': 'Bearer {}'.format(token)} ) message = 'Response when calling {}:\n\n'.format(url) message += resp.text return message, 200, {'Content-type': 'text/plain'}
Testing updates for asserting identity
To run your app locally and test if the app can successfully send ID tokens:
Follow these steps to make the credentials of the default App Engine service account available in your local environment (the Google OAuth APIs require these credentials to generate an ID token):
Enter the following
gcloudcommand to retrieve the service account key for your project's default App Engine account:
gcloud iam service-accounts keys create ~/key.json --iam-account project-ID@appspot.gserviceaccount.com
Replace project-ID with the ID of your Google Cloud project.
The service account key file is now downloaded to your machine. You can move and rename this file however you would like. Make sure you store this file securely, because it can be used to authenticate as your service account. If you lose the file, or if the file is exposed to unauthorized users, delete the service account key and create a new one.
Enter the following command:
<code>export GOOGLE_APPLICATION_CREDENTIALS=<var>service-account-key</var></code>
Replace service-account-key with the absolute pathname of the file that contains the service account key you downloaded.
In the same shell in which you exported the
GOOGLE_APPLICATION_CREDENTIALSenvironment variable, start your Python app.
Send a request from the app and confirm that it succeeds. If you don't already have an app that can receive requests and use ID tokens to verify identities:
- Download the sample "incoming" app.
In the sample's
main.pyfile, add the ID of your Google Cloud project to the
allowed_app_ids. For example:
allowed_app_ids = [ '<APP_ID_1>', '<APP_ID_2>', 'my-project-id' ]
Run the updated sample in the Python 2 local development server.
Verifying and processing requests
To upgrade your Python 2 apps to use either ID tokens or App Identity API identities before processing requests:
Install the google-auth client library.
Update your code to do the following:
If the request contains the
X-Appengine-Inbound-Appidheader, use that header to verify identity. Apps running in a legacy runtime such as Python 2 will contain this header.
If the request does not contain the
X-Appengine-Inbound-Appidheader, check for an OIDC ID token. If the token exists, verify the token payload and check the identity of the sender.
-
Installing the google-auth client library for Python 2 apps
To make the
google-auth client library available to your Python 2 app:
Create a
requirements.txtfile in the same folder as your
app.yamlfile and add the following line:
google-auth==1.19.2
We recommend you use the 1.19.2 version of the Cloud Logging client library since it supports Python 2.7 apps.
In your app's
app.yamlfile, specify the SSL library in the
librariessection if it isn't already specified:
libraries: - name: ssl version: latest
Create a directory to store your third-party libraries, such as
lib/. Then use
pip installto install the libraries into the directory. For example:
pip install -t lib -r requirements.txt
Create an
appengine_config.pyfile in the same folder as your
app.yamlf)
The
appengine_config.pyfile in the preceding example assumes that')
For local development, we recommend that you install dependencies in a virtual environment such as virtualenv for Python 2.
Updating code for verifying requests
Search through your code and find all instances of getting the value of the
X-Appengine-Inbound-Appid header. Update those instances to do the following:
Add the following imports:
from google.auth.transport import requests as reqs from google.oauth2 import id_token
If the incoming request doesn't contain the
X-Appengine-Inbound-Appidheader, look for the
Authorizationheader and retrieve its value.
The header value is formatted as "ID: token".
Use
google.oauth2.id_token.verify_oauth2_token(token, request, audience)to verify and retrieve the decoded token payload. Include the following parameters in the method call:
token: Pass the token you extracted from the incoming request.
request: Pass a new
google.auth.transport.Requestobject.
audience: Pass the URL of the current app (the app that is sending the verification request). Google's authorization server will compare this URL to the URL that was provided when the token was originally generated. If the URLs don't match, the token will not be verified and the authorization server will return an error.
The
verify_oauth2_tokenmethod returns the decoded token payload, which contains several name/value pairs, including the email address of the default service account for the app that generated the token.
Extract the username from the email address in the token payload.
The username is the same as the project ID of that app that sent the request. This is the same value that was previously returned in the
X-Appengine-Inbound-Appidheader.
If the username/project ID is in the list of allowed project IDs, process the request.. """ Authenticate requests coming from other App Engine instances. """ from google.oauth2 import id_token from google.auth.transport import requests import logging import webapp2 def get_app_id(request): # Requests from App Engine Standard for Python 2.7 will include a # trustworthy X-Appengine-Inbound-Appid. Other requests won't have # that header, as the App Engine runtime will strip it out incoming_app_id = request.headers.get( 'X-Appengine-Inbound-Appid', None) if incoming_app_id is not None: return incoming_app_id # Other App Engine apps can get an ID token for the App Engine default # service account, which will identify the application ID. They will # have to include at token in an Authorization header to be recognized # by this method. auth_header = request.headers.get('Authorization', None) if auth_header is None: return None # The auth_header must be in the form Authorization: Bearer token. bearer, token = auth_header.split() if bearer.lower() != 'bearer': return None try: info = id_token.verify_oauth2_token(token, requests.Request()) service_account_email = info['email'] incoming_app_id, domain = service_account_email.split('@') if domain != 'appspot.gserviceaccount.com': # Not App Engine svc acct return None else: return incoming_app_id except Exception as e: # report or log if desired, as here: logging.warning('Request has bad OAuth2 id token: {}'.format(e)) return None class MainPage(webapp2.RequestHandler): allowed_app_ids = [ 'other-app-id', 'other-app-id-2' ] def get(self): incoming_app_id = get_app_id(self.request) if incoming_app_id is None: self.abort(403) if incoming_app_id not in self.allowed_app_ids: self.abort(403) self.response.write('This is a protected page.') app = webapp2.WSGIApplication([ ('/', MainPage) ], debug=True)
Testing updates for verifying identity
To test that your app can use either an ID token or
the
X-Appengine-Inbound-Appid header to verify requests, run the app in
the Python 2
local development server
and send requests from Python 2 apps (which will use the App Identity API) and
from Python 3 apps that send ID tokens.
If you haven't updated your apps to send ID tokens:
Download the sample "requesting" app.
Add service account credentials to your local environment as described in Testing updates for asserting apps.
Use standard Python 3 commands to start the Python 3 sample app.
Send a request from the sample app and confirm that it succeeds.
Deploying your apps
Once you are ready to deploy your apps, you should:
Test the apps on App Engine.
If the apps run without errors, use traffic splitting to slowly ramp up traffic for your updated apps. Monitor the apps closely for any issues before routing more traffic to the updated apps.
Using a different service account for asserting identity
When you request an ID token, the request uses the identity of the App Engine default service account by default. When you verify the token, the token payload contains the email address of the default service account, which maps to the project ID of your app.
The App Engine default service account has a very high level of permission by default. It can view and edit your entire Google Cloud project, so in most cases this account is not appropriate to use when your app needs to authenticate with Cloud services.
However, the default service account is safe to use when asserting app identity because you are only using the ID token to verify the identity of the app that sent a request. The actual permissions that have been granted to the service account are not considered or needed during this process.
If you still prefer to use a different service account for your ID token requests, do the following:
Set an environment variable named
GOOGLE_APPLICATION_CREDENTIALSto the path of a JSON file that contains the credentials of the service account. See our recommendations for safely storing these credentials.
Use
google.oauth2.id_token.fetch_id_token(request, audience)to retrieve an ID token.
When you verify this token, the token payload will contain the email address of the new service account. | https://cloud.google.com/appengine/docs/standard/python/migrate-to-python3/migrate-app-identity?hl=ar | CC-MAIN-2021-17 | refinedweb | 2,631 | 55.13 |
i want to print the line number and column number of all the occurance of a string "//" in a given input file. I am able to count the number of occurance, but not with the column number
#include <stdio.h> int main () { FILE * pFile; int c; int n = 0; pFile=fopen ("testing.c","r"); if (pFile==NULL) perror ("Error opening file"); else { do { c = fgetc (pFile); if (c == '/') { c = fgetc (pFile); if (c == '/') { n++; } } } while (c != EOF); fclose (pFile); printf ("The file contains %d comment characters (//).\n",n); } return 0; }
*** MOD EDIT: Added code tags. Please
This post has been edited by JackOfAllTrades: 21 April 2009 - 04:45 AM | http://www.dreamincode.net/forums/topic/100704-finding-the-line-number-and-column-number-of-a-character-in-a-file/ | CC-MAIN-2017-22 | refinedweb | 109 | 82.04 |
-------------------------------------------------------------------------------
-- Copyright (c) 1998-2012 Free Software Foundation, Inc.
-------------------------------------------------------------------------------
-- $Id: NEWS,v 1.1899 2012/04/28 22:50:44 $
-------------------------------------------------------------------------------

20120428
	+ fix some inconsistencies between vt320/vt420, e.g., cnorm/civis -TD
	+ add eslok flag to dec+sl -TD
	+ dec+sl applies to vt320 and up -TD
	+ drop wsl width from xterm+sl -TD
	+ reuse xterm+sl in putty and nsca-m -TD
	+ add ansi+tabs to vt520 -TD
	+ add ansi+enq to vt220-vt520 -TD
	+ fix a compiler warning in example in ncurses-intro.doc (Paul Waring).
	+ added paragraph in keyname manpage telling how extended capabilities
	  are interpreted as key definitions.
	+ modify tic's check of conflicting key definitions to include extended
	  capability strings in addition to the existing check on predefined
	  keys.

20120421
	+ improve cleanup of temporary files in tic using atexit().
	+ add msgr to vt420, similar DEC vtXXX entries -TD
	+ add several missing vt420 capabilities from vt220 -TD
	+ factor out ansi+pp from several entries -TD
	+ change xterm+sl and xterm+sl-twm to include only the status-line
	  capabilities and not "use=xterm", making them more generally useful
	  as building-blocks -TD
	+ add dec+sl building block, as example -TD

20120414
	+ add XT to some terminfo entries to improve usefulness for other
	  applications than screen, which would like to pretend that xterm's
	  title is a status-line. -TD
	+ change use-clauses in ansi-mtabs, hp2626, and hp2622 based on review
	  of ordering and overrides -TD
	+ add consistency check in tic for screen's "XT" capability.
	+ add section in terminfo.src summarizing the user-defined capabilities
	  used in that file -TD

20120407
	+ fix an inconsistency between tic/infocmp "-x" option; tic omits all
	  non-standard capabilities, while infocmp was ignoring only the user
	  definable capabilities.
	+ improve special case in tic parsing of description to allow it to be
	  followed by terminfo capabilities.  Previously the description had to
	  be the last field on an input line to allow tic to distinguish
	  between termcap and terminfo format while still allowing commas to be
	  embedded in the description.
	+ correct variable name in gen_edit.sh which broke configurability of
	  the --with-xterm-kbs option.
	+ revert 2011-07-16 change to "linux" alias, return to "linux2.2" -TD
	+ further amend 20110910 change, providing for configure-script
	  override of the "linux" terminfo entry to install and changing the
	  default for that to "linux2.2" (Debian #665959).

20120331
	+ update Ada95/configure to use CF_DISABLE_ECHO (cf: 20120317).
	+ correct order of use-clauses in st-256color -TD
	+ modify configure script to look for gnatgcc if the Ada95 binding
	  is built, in preference to the default gcc/cc (suggested by
	  Nicolas Boulenguez).
	+ modify configure script to ensure that the same -On option used for
	  the C compiler in CFLAGS is used for ADAFLAGS rather than simply
	  using "-O3" (suggested by Nicolas Boulenguez)

20120324
	+ amend an old fix so that next_char() exits properly for empty files,
	  e.g., from reading /dev/null (cf: 20080804).
	+ modify tic so that it can read from the standard input, or from
	  a character device.  Because tic uses seek's, this requires writing
	  the data to a temporary file first (prompted by remark by Sven
	  Joachim) (cf: 20000923).

20120317
	+ correct a check made in lib_napms.c, so that terminfo applications
	  can again use napms() (cf: 20110604).
	+ add a note in tic.h regarding required casts for ABSENT_BOOLEAN
	  (cf: 20040327).
	+ correct scripting for --disable-echo option in test/configure.
	+ amend check for missing c++ compiler to work when no error is
	  reported, and no variables set (cf: 20021206).
	+ add/use configure macro CF_DISABLE_ECHO.

20120310
	+ fix some strict compiler warnings for abi6 and 64-bits.
	+ use begin_va_copy/end_va_copy macros in lib_printw.c (cf: 20120303).
	+ improve a limit-check in infocmp.c (Werner Fink):

20120303
	+ minor tidying of terminfo.tail, clarify reason for limitation
	  regarding mapping of \0 to \200
	+ minor improvement to _nc_copy_termtype(), using memcpy to replace
	  loops.
	+ fix no-leaks checking in test/demo_termcap.c to account for multiple
	  calls to setupterm().
	+ modified the libgpm change to show previous load as a problem in the
	  debug-trace.
	> merge some patches from OpenSUSE rpm (Werner Fink):
	+ ncurses-5.7-printw.dif, fixes for varargs handling in lib_printw.c
	+ ncurses-5.7-gpm.dif, do not dlopen libgpm if already loaded by
	  runtime linker
	+ ncurses-5.6-fallback.dif, do not free arrays and strings from static
	  fallback entries

20120228
	+ fix breakage in tic/infocmp from 20120225 (report by Werner Fink).

20120225
	+ modify configure script to allow creating dll's for MinGW when
	  cross-compiling.
	+ add --enable-string-hacks option to control whether strlcat and
	  strlcpy may be used.  The same issue applies to OpenBSD's warnings
	  about snprintf, noting that this function is weakly standardized.
	+ add configure checks for strlcat, strlcpy and snprintf, to help
	  reduce bogus warnings with OpenBSD builds.
	+ build-fix for OpenBSD 4.9 to supply consistent intptr_t declaration
	  (cf: 20111231)
	+ update config.guess, config.sub

20120218
	+ correct CF_ETIP_DEFINES configure macro, making it exit properly on
	  the first success (patch by Pierre Labastie).
	+ improve configure macro CF_MKSTEMP by moving existence-check for
	  mkstemp out of the AC_TRY_RUN, to help with cross-compiles.
	+ improve configure macro CF_FUNC_POLL from luit changes to detect
	  broken implementations, e.g., with Mac OS X.
	+ add configure option --with-tparm-arg
	+ build-fix for MinGW cross-compiling, so that make_hash does not
	  depend on TTY definition (cf: 20111008).

20120211
	+ make sgr for xterm-pcolor agree with other caps -TD
	+ make sgr for att5425 agree with other caps -TD
	+ make sgr for att630 agree with other caps -TD
	+ make sgr for linux entries agree with other caps -TD
	+ make sgr for tvi9065 agree with other caps -TD
	+ make sgr for ncr260vt200an agree with other caps -TD
	+ make sgr for ncr160vt100pp agree with other caps -TD
	+ make sgr for ncr260vt300an agree with other caps -TD
	+ make sgr for aaa-60-dec-rv, aaa+dec agree with other caps -TD
	+ make sgr for cygwin, cygwinDBG agree with other caps -TD
	+ add configure option --with-xterm-kbs to simplify configuration for
	  Linux versus most other systems.

20120204
	+ improved tic -D option, avoid making target directory and provide
	  better diagnostics.

20120128
	+ add mach-gnu (Debian #614316, patch by Samuel Thibault)
	+ add mach-gnu-color, tweaks to mach-gnu terminfo -TD
	+ make sgr for sun-color agree with smso -TD
	+ make sgr for prism9 agree with other caps -TD
	+ make sgr for icl6404 agree with other caps -TD
	+ make sgr for ofcons agree with other caps -TD
	+ make sgr for att5410v1, att4415, att620 agree with other caps -TD
	+ make sgr for aaa-unk, aaa-rv agree with other caps -TD
	+ make sgr for avt-ns agree with other caps -TD
	+ amend fix intended to separate fixups for acsc to allow "tic -cv" to
	  give verbose warnings (cf: 20110730).
	+ modify misc/gen-edit.sh to make the location of the tabset directory
	  consistent with misc/Makefile.in, i.e., using ${datadir}/tabset
	  (Debian #653435, patch by Sven Joachim).

20120121
	+ add --with-lib-prefix option to allow configuring for old/new flavors
	  of OS/2 EMX.
	+ modify check for gnat version to allow for year, as used in FreeBSD
	  port.
	+ modify check_existence() in db_iterator.c to simply check if the
	  path is a directory or file, according to the need.  Checking for
	  directory size also gives no usable result with OS/2 (cf: 20120107).
	+ support OS/2 kLIBC (patch by KO Myung-Han).

20120114
	+ several improvements to test/movewindow.c (prompted by discussion on
	  Linux Mint forum):
	  + modify movement commands to make them continuous
	  + rewrote the test for mvderwin
	  + rewrote the test for recursive mvwin
	+ split-out reusable CF_WITH_NCURSES_ETC macro in test/configure.in
	+ updated configure macro CF_XOPEN_SOURCE, build-fixes for Mac OS X
	  and OpenBSD.
	+ regenerated html manpages.

20120107
	+ various improvements for MinGW (Juergen Pfeifer):
	  + modify stat() calls to ignore the st_size member
	  + drop mk-dlls.sh script.
	  + change recommended regular expression library.
	+ modify rain.c to allow for threaded configuration.
	+ modify tset.c to allow for case when size-change logic is not used.

20111231
	+ modify toe's report when -a and -s options are combined, to add
	  a column showing which entries belong to a given database.
	+ add -s option to toe, to sort its output.
	+ modify progs/toe.c, simplifying use of db-iterator results to use
	  caching improvements from 20111001 and 20111126.
	+ correct generation of pc-files when ticlib or termlib options are
	  given to rename the corresponding tic- or tinfo-libraries (report
	  by Sven Joachim).

20111224
	+ document a portability issue with tput, i.e., that scripts which work
	  with ncurses may fail in other implementations that do no parameter
	  analysis.
	+ add putty-sco entry -TD

20111217
	+ review/fix places in manpages where --program-prefix configure option
	  was not being used.
	+ add -D option to infocmp, to show the database locations that it
	  could use.
	+ fix build for the special case where term-driver, ticlib and termlib
	  are all enabled.  The terminal driver depends on a few features in
	  the base ncurses library, so tic's dependencies include both ncurses
	  and termlib.
	+ make build work for term-driver when --enable-wgetch-events option is
	  enabled.
	+ use <stdint.h> types to fix some questionable casts to void*.

20111210
	+ modify configure script to check if thread library provides
	  pthread_mutexattr_settype(), e.g., not provided by Solaris 2.6
	+ modify configure script to suppress check to define _XOPEN_SOURCE
	  for IRIX64, since its header files have a conflict versus
	  _SGI_SOURCE.
	+ modify configure script to add ".pc" files for tic- and
	  tinfo-libraries, which were omitted in recent change (cf: 20111126).
	+ fix inconsistent checks on $PKG_CONFIG variable in configure script.

20111203
	+ modify configure-check for etip.h dependencies, supplying a temporary
	  copy of ncurses_dll.h since it is a generated file (prompted by
	  Debian #646977).
	+ modify CF_CPP_PARAM_INIT "main" function to work with current C++.

20111126
	+ correct database iterator's check for duplicate entries
	  (cf: 20111001).
	+ modify database iterator to ignore $TERMCAP when it is not an
	  absolute pathname.
	+ add -D option to tic, to show the database locations that it could
	  use.
	+ improve description of database locations in tic manpage.
	+ modify the configure script to generate a list of the ".pc" files to
	  generate, rather than deriving the list from the libraries which have
	  been built (patch by Mike Frysinger).
	+ use AC_CHECK_TOOLS in preference to AC_PATH_PROGS when searching for
	  ncurses*-config, e.g., in Ada95/configure and test/configure (adapted
	  from patch by Mike Frysinger).

20111119
	+ remove obsolete/conflicting fallback definition for _POSIX_SOURCE
	  from curses.priv.h, fixing a regression with IRIX64 and Tru64
	  (cf: 20110416)
	+ modify _nc_tic_dir() to ensure that its return-value is nonnull,
	  i.e., the database iterator was not initialized.  This case is
	  needed when tic is translating to termcap, rather than loading the
	  database (cf: 20111001).

20111112
	+ add pccon entries for OpenBSD console (Alexei Malinin).
	+ build-fix for OpenBSD 4.9 with gcc 4.2.1, setting _XOPEN_SOURCE to
	  600 to work around inconsistent ifdef'ing of wcstof between C and
	  C++ header files.
	+ modify capconvert script to accept more than exact match on "xterm",
	  e.g., the "xterm-*" variants, to exclude from the conversion (patch
	  by Robert Millan).
	+ add -lc_r as alternative for -lpthread, allowing build of threaded
	  code on older FreeBSD machines.
	+ fix a typo in misc/Makefile.in, used in uninstalling pc-files.

20111030
	+ modify make_db_path() to allow creating "terminfo.db" in the same
	  directory as an existing "terminfo" directory.  This fixes a case
	  where switching between hashed/filesystem databases would cause the
	  new hashed database to be installed in the next best location -
	  root's home directory.
	+ add variable cf_cv_prog_gnat_correct to those passed to
	  config.status, fixing a problem with Ada95 builds (cf: 20111022).
	+ change feature test from _XPG5 to _XOPEN_SOURCE in two places, to
	  accommodate broken implementations for _XPG6.
	+ eliminate usage of NULL symbol from etip.h, to reduce header
	  interdependencies.
        + add configure check to decide when to add _XOPEN_SOURCE define to compiler options, i.e., for Solaris 10 and later (cf: 20100403).  This is a workaround for gcc 4.6, which fails to build the c++ binding if that symbol is defined by the application, due to incorrectly combining the corresponding feature test macros (report by Peter Kruse).

20111022
        + correct logic for discarding mouse events, retaining the partial events used to build up click, double-click, etc, until needed (cf: 20110917).
        + fix configure script to avoid creating unused Ada95 makefile when gnat does not work.
        + cleanup width-related gcc 3.4.3 warnings for 64-bit platform, for the internal functions of libncurses.  The external interface of course uses bool, which still produces these warnings.

20111015
        + improve description of --disable-tic-depends option to make it clear that it may be useful whether or not the --with-termlib option is also given (report by Sven Joachim).
        + amend termcap equivalent for set_pglen_inch to use the X/Open "YI" rather than the obsolete Solaris 2.5 "sL" (cf: 990109).
        + improve manpage for tgetent differences from termcap library.

20111008
        + moved static data from db_iterator.c to lib_data.c
        + modify db_iterator.c for memory-leak checking, fix one leak.
        + modify misc/gen-pkgconfig.in to use Requires.private for the parts of ncurses rather than Requires, as well as Libs.private for the other library dependencies (prompted by Debian #644728).

20111001
        + modify tic "-K" option to only set the strict-flag rather than force source-output.  That allows the same flag to control the parser for input and output of termcap source.
        + modify _nc_getent() to ignore backslash at the end of a comment line, making it consistent with ncurses' parser.
        + restore a special-case check for directory needed to make termcap text files load as if they were databases (cf: 20110924).
        + modify tic's resolution/collision checking to attempt to remove the conflicting alias from the second entry in the pair, which normally follows in the source file.  Also improved the warning message to make it simpler to see which alias is the problem.
        + improve performance of the database iterator by caching search-list.

20110925
        + add a missing "else" in changes to _nc_read_tic_entry().

20110924
        + modify _nc_read_tic_entry() so that hashed-database is checked before filesystem.
        + updated CF_CURSES_LIBS check in test/configure script.
        + modify configure script and makefiles to split TIC_ARGS and TINFO_ARGS into pieces corresponding to LDFLAGS and LIBS variables, to help separate searches for tic- and tinfo-libraries (patch by Nick Alcock aka "Nix").
        + build-fix for lib_mouse.c changes (cf: 20110917).

20110917
        + fix compiler warning for clang 2.9
        + improve merging of mouse events (integrated patch by Damien Guibouret).
        + correct mask-check used in lib_mouse for wheel mouse buttons 4/5 (patch by Damien Guibouret).

20110910
        + modify misc/gen_edit.sh to select a "linux" entry which works with the current kernel rather than assuming it is always "linux3.0" (cf: 20110716).
        + revert a change to getmouse() which had the undesirable side-effect of suppressing button-release events (report by Damien Guibouret, cf: 20100102).
        + add xterm+kbs fragment from xterm #272 -TD
        + add configure option --with-pkg-config-libdir to provide control over the actual directory into which pc-files are installed, do not use the pkg-config environment variables (discussion with Frederic L W Meunier).
        + add link to mailing-list archive in announce.html.in, as done in FAQ (prompted by question by Andrius Bentkus).
        + improve manpage install by adjusting the "#include" examples to show the ncurses-subdirectory used when --disable-overwrite option is used.
        + install an alias for "curses" to the ncurses manpage, tied to the --with-curses-h configure option (suggested by Reuben Thomas).

20110903
        + propagate error-returns from wresize, i.e., the internal increase_size and decrease_size functions through resize_term (report by Tim van der Molen, cf: 20020713).
        + fix typo in tset manpage (patch by Sven Joachim).

20110820
        + add a check to ensure that termcap files which might have "^?" do not use the terminfo interpretation as "\177".
        + minor cleanup of X-terminal emulator section of terminfo.src -TD
        + add terminator entry -TD
        + add simpleterm entry -TD
        + improve wattr_get macros by ensuring that if the window pointer is null, then the attribute and color values returned will be zero (cf: 20110528).

20110813
        + add substitution for $RPATH_LIST to misc/ncurses-config.in
        + improve performance of tic with hashed-database by caching the database connection, using atexit() to cleanup.
        + modify treatment of 2-character aliases at the beginning of termcap entries so they are not counted in use-resolution, since these are guaranteed to be unique.  Also ignore these aliases when reporting the primary name of the entry (cf: 20040501)
        + double-check gn (generic) flag in terminal descriptions to accommodate old/buggy termcap databases which misused that feature.
        + minor fixes to _nc_tgetent(), ensure buffer is initialized even on error-return.

20110807
        + improve rpath fix from 20110730 by ensuring that the new $RPATH_LIST variable is defined in the makefiles which use it.
        + build-fix for DragonFlyBSD's pkgsrc in test/configure script.
        + build-fixes for NetBSD 5.1 with termcap support enabled.
        + corrected k9 in dg460-ansi, add other features based on manuals -TD
        + improve trimming of whitespace at the end of terminfo/termcap output from tic/infocmp.
        + when writing termcap source, ensure that colons in the description field are translated to a non-delimiter, i.e., "=".
        + add "-0" option to tic/infocmp, to make the termcap/terminfo source use a single line.
        + add a null-pointer check when handling the $CC variable.

20110730
        + modify configure script and makefiles in c++ and progs to allow the directory used for rpath option to be overridden, e.g., to work around updates to the variables used by tic during an install.
        + add -K option to tic/infocmp, to provide stricter BSD-compatibility for termcap output.
        + add _nc_strict_bsd variable in tic library which controls the "strict" BSD termcap compatibility from 20110723, plus these features:
          + allow escapes such as "\8" and "\9" when reading termcap
          + disallow "\a", "\e", "\l", "\s" and "\:" escapes when reading termcap files, passing through "a", "e", etc.
          + expand "\:" as "\072" on output.
        + modify _nc_get_token() to reset the token's string value in case there is a string-typed token lacking the "=" marker.
        + fix a few memory leaks in _nc_tgetent.
        + fix a few places where reading from a termcap file could refer to freed memory.
        + add an overflow check when converting terminfo/termcap numeric values, since terminfo stores those in a short, and they must be positive.
        + correct internal variables used for translating to termcap "%>" feature, and translating from termcap %B to terminfo, needed by tctest (cf: 19991211).
        + amend a minor fix to acsc when loading a termcap file to separate it from warnings needed for tic (cf: 20040710)
        + modify logic in _nc_read_entry() and _nc_read_tic_entry() to allow a termcap file to be handled via TERMINFO_DIRS.
        + modify _nc_infotocap() to include non-mandatory padding when translating to termcap.
        + modify _nc_read_termcap_entry(), passing a flag in the case where getcap is used, to reduce interactive warning messages.

20110723
        + add a check in start_color() to limit color-pairs to 256 when extended colors are not supported (patch by David Benjamin).
        + modify setcchar to omit no-longer-needed OR'ing of color pair in the SetAttr() macro (patch by David Benjamin).
        + add kich1 to sun terminfo entry (Yuri Pankov)
        + use bold rather than reverse for smso in sun-color terminfo entry (Yuri Pankov).
        + improve generation of termcap using tic/infocmp -C option, e.g., to correspond with 4.2BSD (prompted by discussion with Yuri Pankov regarding Schilling's test program):
          + translate %02 and %03 to %2 and %3 respectively.
          + suppress string capabilities which use %s, not supported by tgoto
          + use \040 rather than \s
          + expand null characters as \200 rather than \0
        + modify configure script to support shared libraries for DragonFlyBSD.

20110716
        + replace an assert() in _nc_Free_Argument() with a regular null pointer check (report/analysis by Franjo Ivancic).
        + modify configure --enable-pc-files option to take into account the PKG_CONFIG_PATH variable (report by Frederic L W Meunier).
        + add/use xterm+tmux chunk from xterm #271 -TD
        + resync xterm-new entry from xterm #271 -TD
        + add E3 extended capability to linux-basic (Miroslav Lichvar)
        + add linux2.2, linux2.6, linux3.0 entries to give context for E3 -TD
        + add SI/SO change to linux2.6 entry (Debian #515609) -TD
        + fix inconsistent tabset path in pcmw (Todd C. Miller).
        + remove a backslash which continued a comment, obscuring the altos3 definition with OpenBSD toolset (Nicholas Marriott).

20110702
        + add workaround from xterm #271 changes to ensure that compiler flags are not used in the $CC variable.
        + improve support for shared libraries, tested with AIX 5.3, 6.1 and 7.1 with both gcc 4.2.4 and cc.
        + modify configure checks for AIX to include release 7.x
        + add loader flags/libraries to libtool options so that dynamic loading works properly, adapted from ncurses-5.7-ldflags-with-libtool.patch at gentoo prefix repository (patch by Michael Haubenwallner).

20110626
        + move include of nc_termios.h out of term_entry.h, since the latter is installed, e.g., for tack while the former is not (report by Sven Joachim).

20110625
        + improve cleanup() function in lib_tstp.c, using _exit() rather than exit() and checking for SIGTERM rather than SIGQUIT (prompted by comments forwarded by Nicholas Marriott).
        + reduce name pollution from term.h, moving fallback #define's for tcgetattr(), etc., to new private header nc_termios.h (report by Sergio NNX).
        + two minor fixes for tracing (patch by Vassili Courzakis).
        + improve trace initialization by starting it in use_env() and ripoffline().
        + review old email, add details for some changelog entries.

20110611
        + update minix entry to minix 3.2 (Thomas Cort).
        + fix a strict compiler warning in change to wattr_get (cf: 20110528).

20110604
        + fixes for MirBSD port:
          + set default prefix to /usr.
          + add support for shared libraries in configure script.
          + use S_ISREG and S_ISDIR consistently, with fallback definitions.
          + add a few more checks based on ncurses/link_test.
        + modify MKlib_gen.sh to handle sp-funcs renaming of NCURSES_OUTC type.

20110528
        + add case to CF_SHARED_OPTS for Interix (patch by Markus Duft).
        + used ncurses/link_test to check for behavior when the terminal has not been initialized and when an application passes null pointers to the library.  Added checks to cover this (prompted by Redhat #707344).
        + modify MKlib_gen.sh to make its main() function call each function with zero parameters, to help find inconsistent checking for null pointers, etc.

20110521
        + fix warnings from clang 2.7 "--analyze"

20110514
        + compiler-warning fixes in panel and progs.
        + modify CF_PKG_CONFIG macro, from changes to tin -TD
        + modify CF_CURSES_FUNCS configure macro, used in test directory configure script:
          + work around (non-optimizer) bug in gcc 4.2.1 which caused test-expression to be omitted from executable.
          + force the linker to see a link-time expression of a symbol, to help work around weak-symbol issues.

20110507
        + update discussion of MKfallback.sh script in INSTALL; normally the script is used automatically via the configured makefiles.  However there are still occasions when it might be used directly by packagers (report by Gunter Schaffler).
        + modify misc/ncurses-config.in to omit the "-L" option from the "--libs" output if the library directory is /usr/lib.
        + change order of tests for curses.h versus ncurses.h headers in the configure scripts for Ada95 and test-directories, to look for ncurses.h, from fixes to tin -TD
        + modify ncurses/tinfo/access.c to account for Tandem's root uid (report by Joachim Schmitz).

20110430
        + modify rules in Ada95/src/Makefile.in to ensure that the PIC option is not used when building a static library (report by Nicolas Boulenguez).
        + Ada95 build-fix for big-endian architectures such as sparc.
This 605 undoes one of the fixes from 20110319, which added an "Unused" member 606 to representation clauses, replacing that with pragmas to suppress 607 warnings about unused bits (patch by Nicolas Boulenguez): 608 609 20110423 610 + add check in test/configure for use_window, use_screen. 611 + add configure-checks for getopt's variables, which may be declared 612 as different types on some Unix systems. 613 + add check in test/configure for some legacy curses types of the 614 function pointer passed to tputs(). 615 + modify init_pair() to accept -1's for color value after 616 assume_default_colors() has been called (Debian #337095). 617 + modify test/background.c, adding commmand-line options to demonstrate 618 assume_default_colors() and use_default_colors(). 619 620 20110416 621 + modify configure script/source-code to only define _POSIX_SOURCE if 622 the checks for sigaction and/or termios fail, and if _POSIX_C_SOURCE 623 and _XOPEN_SOURCE are undefined (report by Valentin Ochs). 624 + update config.guess, config.sub 625 626 20110409 627 + fixes to build c++ binding with clang 3.0 (patch by Alexander 628 Kolesen). 629 + add check for unctrl.h in test/configure, to work around breakage in 630 some ncurses packages. 631 + add "--disable-widec" option to test/configure script. 632 + add "--with-curses-colr" and "--with-curses-5lib" options to the 633 test/configure script to address testing with very old machines. 634 635 20110404 5.9 release for upload to 636 637 20110402 638 + various build-fixes for the rpm/dpkg scripts. 639 + add "--enable-rpath-link" option to Ada95/configure, to allow 640 packages to suppress the rpath feature which is normally used for 641 the in-tree build of sample programs. 642 + corrected definition of libdir variable in Ada95/src/Makefile.in, 643 needed for rpm script. 
        + add "--with-shared" option to Ada95/configure script, to allow making the C-language parts of the binding use appropriate compiler options if building a shared library with gnat.

20110329
        > portability fixes for Ada95 binding:
        + add configure check to ensure that SIGINT works with gnat.  This is needed for the "rain" sample program.  If SIGINT does not work, omit that sample program.
        + correct typo in check of $PKG_CONFIG variable in Ada95/configure
        + add ncurses_compat.c, to supply functions used in the Ada95 binding which were added in 5.7 and later.
        + modify sed expression in CF_NCURSES_ADDON to eliminate a dependency upon GNU sed.

20110326
        + add special check in Ada95/configure script for ncurses6 reentrant code.
        + regen Ada html documentation.
        + build-fix for Ada shared libraries versus the varargs workaround.
        + add rpm and dpkg scripts for Ada95 and test directories, for test builds.
        + update test/configure macros CF_CURSES_LIBS, CF_XOPEN_SOURCE and CF_X_ATHENA_LIBS.
        + add configure check to determine if gnat's project feature supports libraries, i.e., collections of .ali files.
        + make all dereferences in Ada95 samples explicit.
        + fix typo in comment in lib_add_wch.c (patch by Petr Pavlu).
        + add configure check for, and ifdef's for, math.h which is in a separate package on Solaris and potentially not installed (report by Petr Pavlu).
        > fixes for Ada95 binding (Nicolas Boulenguez):
        + improve type-checking in Ada95 by eliminating a few warning-suppress pragmas.
        + suppress unreferenced warnings.
        + make all dereferences in binding explicit.

20110319
        + regen Ada html documentation.
        + change order of -I options from ncurses*-config script when the --disable-overwrite option was used, so that the subdirectory include is listed first.
        + modify the make-tar.sh scripts to add a MANIFEST and NEWS file.
        + modify configure script to provide value for HTML_DIR in Ada95/gen/Makefile.in, which depends on whether the Ada95 binding is distributed separately (report by Nicolas Boulenguez).
        + modify configure script to add "-g" and/or "-O3" to ADAFLAGS if the CFLAGS for the build has these options.
        + amend change from 20070324, to not add 1 to the result of getmaxx and getmaxy in the Ada binding (report by Nicolas Boulenguez for thread in comp.lang.ada).
        + build-fix Ada95/samples for gnat 4.5
        + spelling fixes for Ada95/samples/explain.txt
        > fixes for Ada95 binding (Nicolas Boulenguez):
        + add item in Trace_Attribute_Set corresponding to TRACE_ATTRS.
        + add workaround for binding to set_field_type(), which uses varargs.  The original binding from 990220 relied on the prevalent implementation of varargs which did not support or need va_copy().
        + add dependency on gen/Makefile.in needed for *-panels.ads
        + add Library_Options to library.gpr
        + add Languages to library.gpr, for gprbuild

20110307
        + revert changes to limit-checks from 20110122 (Debian #616711).
        > minor type-cleanup of Ada95 binding (Nicolas Boulenguez):
        + corrected a minor sign error in a field of Low_Level_Field_Type, to conform to form.h.
        + replaced C_Int by Curses_Bool as return type for some callbacks, see fieldtype(3FORM).
        + modify samples/sample-explain.adb to provide explicit message when explain.txt is not found.

20110305
        + improve makefiles for Ada95 tree (patch by Nicolas Boulenguez).
        + fix an off-by-one error in _nc_slk_initialize() from 20100605 fixes for compiler warnings (report by Nicolas Boulenguez).
        + modify Ada95/gen/gen.c to declare unused bits in generated layouts, needed to compile when chtype is 64-bits using gnat 4.4.5

20110226 5.8 release for upload to

20110226
        + update release notes, for 5.8.
        + regenerated html manpages.
        + change open() in _nc_read_file_entry() to fopen() for consistency with write_file().
        + modify misc/run_tic.in to create parent directory, in case this is a new install of hashed database.
        + fix typo in Ada95/mk-1st.awk which causes error with original awk.

20110220
        + configure script rpath fixes from xterm #269.
        + workaround for cygwin's non-functional features.h, to force ncurses' configure script to define _XOPEN_SOURCE_EXTENDED when building wide-character configuration.
        + build-fix in run_tic.sh for OS/2 EMX install
        + add cons25-debian entry (patch by Brian M Carlson, Debian #607662).

20110212
        + regenerated html manpages.
        + use _tracef() in show_where() function of tic, to work correctly with special case of trace configuration.

20110205
        + add xterm-utf8 entry as a demo of the U8 feature -TD
        + add U8 feature to denote entries for terminal emulators which do not support VT100 SI/SO when processing UTF-8 encoding -TD
        + improve the NCURSES_NO_UTF8_ACS feature by adding a check for an extended terminfo capability U8 (prompted by mailing list discussion).

20110122
        + start documenting interface changes for upcoming 5.8 release.
        + correct limit-checks in derwin().
        + correct limit-checks in newwin(), to ensure that windows have nonzero size (report by Garrett Cooper).
        + fix a missing "weak" declaration for pthread_kill (patch by Nicholas Alcock).
        + improve documentation of KEY_ENTER in curs_getch.3x manpage (prompted by discussion with Kevin Martin).

20110115
        + modify Ada95/configure script to make the --with-curses-dir option work without requiring the --with-ncurses option.
        + modify test programs to allow them to be built with NetBSD curses.
        + document thick- and double-line symbols in curs_add_wch.3x manpage.
        + document WACS_xxx constants in curs_add_wch.3x manpage.
        + fix some warnings for clang 2.6 "--analyze"
        + modify Ada95 makefiles to make html-documentation with the project file configuration if that is used.
        + update config.guess, config.sub

20110108
        + regenerated html manpages.
        + minor fixes to enable lint when trace is not enabled, e.g., with clang --analyze.
        + fix typo in man/default_colors.3x (patch by Tim van der Molen).
        + update ncurses/llib-lncurses*

20110101
        + fix remaining strict compiler warnings in ncurses library ABI=5, except those dealing with function pointers, etc.

20101225
        + modify nc_tparm.h, adding guards against repeated inclusion, and allowing TPARM_ARG to be overridden.
        + fix some strict compiler warnings in ncurses library.

20101211
        + suppress ncv in screen entry, allowing underline (patch by Alejandro R Sedeno).
        + also suppress ncv in konsole-base -TD
        + fixes in wins_nwstr() and related functions to ensure that special characters, i.e., control characters are handled properly with the wide-character configuration.
        + correct a comparison in wins_nwstr() (Redhat #661506).
        + correct help-messages in some of the test-programs, which still referred to quitting with 'q'.

20101204
        + add special case to _nc_infotocap() to recognize the setaf/setab strings from xterm+256color and xterm+88color, and provide a reduced version which works with termcap.
        + remove obsolete emacs "Local Variables" section from documentation (request by Sven Joachim).
        + update doc/html/index.html to include NCURSES-Programming-HOWTO.html (report by Sven Joachim).

20101128
        + modify test/configure and test/Makefile.in to handle this special case of building within a build-tree (Debian #34182):
            mkdir -p build && cd build && ../test/configure && make

20101127
        + miscellaneous build-fixes for Ada95 and test-directories when built out-of-tree.
        + use VPATH in makefiles to simplify out-of-tree builds (Debian #34182).
        + fix typo in rmso for tek4106 entry -Goran Weinholt

20101120
        + improve checks in test/configure for X libraries, from xterm #267 changes.
        + modify test/configure to allow it to use the build-tree's libraries e.g., when using that to configure the test-programs without the rpath feature (request by Sven Joachim).
        + repurpose "gnome" terminfo entries as "vte", retaining "gnome" items for compatibility, but generally deprecating those since the VTE library is what actually defines the behavior of "gnome", etc., since 2003 -TD

20101113
        + compiler warning fixes for test programs.
        + various build-fixes for test-programs with pdcurses.
        + updated configure checks for X packages in test/configure from xterm #267 changes.
        + add configure check to gnatmake, to accommodate cygwin.

20101106
        + correct list of sub-directories needed in Ada95 tree for building as a separate package.
        + modify scripts in test-directory to improve builds as a separate package.

20101023
        + correct parsing of relative tab-stops in tabs program (report by Philip Ganchev).
        + adjust configure script so that "t" is not added to library suffix when weak-symbols are used, allowing the pthread configuration to more closely match the non-thread naming (report by Werner Fink).
        + modify configure check for tic program, used for fallbacks, to a warning if not found.  This makes it simpler to use additional scripts to bootstrap the fallbacks code using tic from the build tree (report by Werner Fink).
        + fix several places in configure script using ${variable-value} form.
        + modify configure macro CF_LDFLAGS_STATIC to accommodate some loaders which do not support selectively linking against static libraries (report by John P.
          Hartmann)
        + fix an unescaped dash in man/tset.1 (report by Sven Joachim).

20101009
        + correct comparison used for setting 16-colors in linux-16color entry (Novell #644831) -TD
        + improve linux-16color entry, using "dim" for color-8 which makes it gray rather than black like color-0 -TD
        + drop misc/ncu-indent and misc/jpf-indent; they are provided by an external package "cindent".

20101002
        + improve linkages in html manpages, adding references to the newer pages, e.g., *_variables, curs_sp_funcs, curs_threads.
        + add checks in tic for inconsistent cursor-movement controls, and for inconsistent printer-controls.
        + fill in no-parameter forms of cursor-movement where a parameterized form is available -TD
        + fill in missing cursor controls where the form of the controls is ANSI -TD
        + fix inconsistent punctuation in form_variables manpage (patch by Sven Joachim).
        + add parameterized cursor-controls to linux-basic (report by Dae) -TD
        > patch by Juergen Pfeifer:
        + document how to build 32-bit libraries in README.MinGW
        + fixes to filename computation in mk-dlls.sh.in
        + use POSIX locale in mk-dlls.sh.in rather than en_US (report by Sven Joachim).
        + add a check in mk-dlls.sh.in to obtain the size of a pointer to distinguish between 32-bit and 64-bit hosts.  The result is stored in mingw_arch

20100925
        + add "XT" capability to entries for terminals that support both xterm-style mouse- and title-controls, for "screen" which special-cases TERM beginning with "xterm" or "rxvt" -TD
        > patch by Juergen Pfeifer:
        + use 64-Bit MinGW toolchain (recommended package from TDM, see README.MinGW).
        + support pthreads when using the TDM MinGW toolchain

20100918
        + regenerated html manpages.
        + minor fixes for symlinks to curs_legacy.3x and curs_slk.3x manpages.
        + add manpage for sp-funcs.
        + add sp-funcs to test/listused.sh, for documentation aids.

20100911
        + add manpages for summarizing public variables of curses-, terminfo- and form-libraries.
        + minor fixes to manpages for consistency (patch by Jason McIntyre).
        + modify tic's -I/-C dump to reformat acsc strings into canonical form (sorted, unique mapping) (cf: 971004).
        + add configure check for pthread_kill(), needed for some old platforms.

20100904
        + add configure option --without-tests, to suppress building test programs (request by Frederic L W Meunier).

20100828
        + modify nsterm, xnuppc and tek4115 to make sgr/sgr0 consistent -TD
        + add check in terminfo source-reader to provide more informative message when someone attempts to run tic on a compiled terminal description (prompted by Debian #593920).
        + note in infotocap and captoinfo manpages that they read terminal descriptions from text-files (Debian #593920).
        + improve acsc string for vt52, show arrow keys (patch by Benjamin Sittler).

20100814
        + document in manpages that "mv" functions first use wmove() to check the window pointer and whether the position lies within the window (suggested by Poul-Henning Kamp).
        + fixes to curs_color.3x, curs_kernel.3x and wresize.3x manpages (patch by Tim van der Molen).
        + modify configure script to transform library names for tic- and tinfo-libraries so that those build properly with Mac OS X shared library configuration.
        + modify configure script to ensure that it removes conftest.dSYM directory leftover on checks with Mac OS X.
        + modify configure script to cleanup after check for symbolic links.

20100807
        + correct a typo in mk-1st.awk (patch by Gabriele Balducci) (cf: 20100724)
        + improve configure checks for location of tic and infocmp programs used for installing database and for generating fallback data, e.g., for cross-compiling.
        + add Markus Kuhn's wcwidth function for compiling MinGW
        + add special case to CF_REGEX for cross-compiling to MinGW target.

20100731
        + modify initialization check for win32con driver to eliminate need for special case for TERM "unknown", using terminal database if available (prompted by discussion with Roumen Petrov).
        + for MinGW port, ensure that terminal driver is set up if tgetent() is called (patch by Roumen Petrov).
        + document tabs "-0" and "-8" options in manpage.
        + fix Debian "lintian" issues with manpages reported in

20100724
        + add a check in tic for missing set_tab if clear_all_tabs given.
        + improve use of symbolic links in makefiles by using "-f" option if it is supported, to eliminate temporary removal of the target (prompted by)
        + minor improvement to test/ncurses.c, reset color pairs in 'd' test after exit from 'm' main-menu command.
        + improved ncu-indent, from mawk changes, allows more than one of GCC_NORETURN, GCC_PRINTFLIKE and GCC_SCANFLIKE on a single line.

20100717
        + add hard-reset for rs2 to wsvt25 to help ensure that reset ends the alternate character set (patch by Nicholas Marriott)
        + remove tar-copy.sh and related configure/Makefile chunks, since the Ada95 binding is now installed using rules in Ada95/src.

20100703
        + continue integrating changes to use gnatmake project files in Ada95
        + add/use configure check to turn on project rules for Ada95/src.
        + revert the vfork change from 20100130, since it does not work.

20100626
        + continue integrating changes to use gnatmake project files in Ada95
        + old gnatmake (3.15) does not produce libraries using project-file; work around by adding script to generate alternate makefile.

20100619
        + continue integrating changes to use gnatmake project files in Ada95
        + add configure --with-ada-sharedlib option, for the test_make rule.
        + move Ada95-related logic into aclocal.m4, since additional checks will be needed to distinguish old/new implementations of gnat.

20100612
        + start integrating changes to use gnatmake project files in Ada95 tree
        + add test_make / test_clean / test_install rules in Ada95/src
        + change install-path for adainclude directory to /usr/share/ada (was /usr/lib/ada).
        + update Ada95/configure.
        + add mlterm+256color entry, for mlterm 3.0.0 -TD
        + modify test/configure to use macros to ensure consistent order of updating LIBS variable.

20100605
        + change search order of options for Solaris in CF_SHARED_OPTS, to work with 64-bit compiles.
        + correct quoting of assignment in CF_SHARED_OPTS case for aix (cf: 20081227)

20100529
        + regenerated html documentation.
        + modify test/configure to support pkg-config for checking X libraries used by PDCurses.
        + add/use configure macro CF_ADD_LIB to force consistency of assignments to $LIBS, etc.
        + fix configure script for combining --with-pthread and --enable-weak-symbols options.

20100522
        + correct cross-compiling configure check for CF_MKSTEMP macro, by adding a check for the cache variable set by AC_CHECK_FUNC (report by Pierre Labastie).
        + simplify include-dependencies of make_hash and make_keys, to reduce the need for setting BUILD_CPPFLAGS in cross-compiling when the build- and target-machines differ.
        + repair broken-linker configuration by restoring a definition of SP variable to curses.priv.h, and adjusting for cases where sp-funcs are used.
        + improve configure macro CF_AR_FLAGS, allowing ARFLAGS environment variable to override (prompted by report by Pablo Cazallas).

20100515
        + add configure option --enable-pthreads-eintr to control whether the new EINTR feature is enabled.
+ modify logic in pthread configuration to allow EINTR to interrupt
  a read operation in wgetch() (Novell #540571, patch by Werner Fink).
+ drop mkdirs.sh, use "mkdir -p".
+ add configure option --disable-libtool-version, to use the
  "-version-number" feature which was added in libtool 1.5 (report by
  Peter Haering).  The default value for the option uses the newer
  feature, which makes libraries generated using libtool compatible
  with the standard builds of ncurses.
+ updated test/configure to match configure script macros.
+ fixes for configure script from lynx changes:
  + improve CF_FIND_LINKAGE logic for the case where a function is
    found in predefined libraries.
  + revert part of change to CF_HEADER (cf: 20100424)

20100501
+ correct limit-check in wredrawln, accounting for begy/begx values
  (patch by David Benjamin).
+ fix most compiler warnings from clang.
+ amend build-fix for OpenSolaris, to ensure that a system header is
  included in curses.h before testing feature symbols, since they
  may be defined by that route.

20100424
+ fix some strict compiler warnings in ncurses library.
+ modify configure macro CF_HEADER_PATH to not look for variations in
  the predefined include directories.
+ improve configure macros CF_GCC_VERSION and CF_GCC_WARNINGS to work
  with gcc 4.x's c89 alias, which gives warning messages for cases
  where older versions would produce an error.

20100417
+ modify _nc_capcmp() to work with cancelled strings.
+ correct translation of "^" in _nc_infotocap(), used to transform
  terminfo to termcap strings
+ add configure --disable-rpath-hack, to allow disabling the feature
  which adds rpath options for libraries in unusual places.
+ improve CF_RPATH_HACK_2 by checking if the rpath option for a given
  directory was already added.
+ improve CF_RPATH_HACK_2 by using ldd to provide a standard list of
  directories (which will be ignored).

20100410
+ improve win_driver.c handling of mouse:
  + discard motion events
  + avoid calling _nc_timed_wait when there is a mouse event
  + handle 4th and "rightmost" buttons.
+ quote substitutions in CF_RPATH_HACK_2 configure macro, needed for
  cases where there are embedded blanks in the rpath option.

20100403
+ add configure check for exctags vs ctags, to work around pkgsrc.
+ simplify logic in _nc_get_screensize() to make it easier to see how
  environment variables may override system- and terminfo-values
  (prompted by discussion with Igor Bujna).
+ make debug-traces for COLOR_PAIR and PAIR_NUMBER less verbose.
+ improve handling of color-pairs embedded in attributes for the
  extended-colors configuration.
+ modify MKlib_gen.sh to build link_test with sp-funcs.
+ build-fixes for OpenSolaris aka Solaris 11, for wide-character
  configuration as well as for rpath feature in *-config scripts.

20100327
+ refactor CF_SHARED_OPTS configure macro, making CF_RPATH_HACK more
  reusable.
+ improve configure CF_REGEX, similar fixes.
+ improve configure CF_FIND_LINKAGE, adding a check between system
  (default) and explicit paths, where we can find the entrypoint in the
  given library.
+ add check if Gpm_Open() returns a -2, e.g., for "xterm".  This is
  normally suppressed but can be overridden using $NCURSES_GPM_TERMS.
  Ensure that Gpm_Close() is called in this case.

20100320
+ rename atari and st52 terminfo entries to atari-old, st52-old, use
  newer entries from FreeMiNT by Guido Flohr (from patch/report by Alan
  Hourihane).

20100313
+ modify install-rule for manpages so that *-config manpages will
  install when building with --srcdir (report by Sven Joachim).
+ modify CF_DISABLE_LEAKS configure macro so that the --enable-leaks
  option is not the same as --disable-leaks (GenToo #305889).
+ modify #define's for build-compiler to suppress cchar_t symbol from
  compile of make_hash and make_keys, improving cross-compilation of
  ncursesw (report by Bernhard Rosenkraenzer).
+ modify CF_MAN_PAGES configure macro to replace all occurrences of
  TPUT in tput.1's manpage (Debian #573597, report/analysis by Anders
  Kaseorg).

20100306
+ generate manpages for the *-config scripts, adapted from help2man
  (suggested by Sven Joachim).
+ use va_copy() in _nc_printf_string() to avoid conflicting use of
  va_list value in _nc_printf_length() (report by Wim Lewis).

20100227
+ add Ada95/configure script, to use in tar-file created by
  Ada95/make-tar.sh
+ fix typo in wresize.3x (patch by Tim van der Molen).
+ modify screen-bce.XXX entries to exclude ech, since screen's color
  model does not clear with color for that feature -TD

20100220
+ add make-tar.sh scripts to Ada95 and test subdirectories to help with
  making those separately distributable.
+ build-fix for static libraries without dlsym (Debian #556378).
+ fix a syntax error in man/form_field_opts.3x (patch by Ingo
  Schwarze).

20100213
+ add several screen-bce.XXX entries -TD

20100206
+ update mrxvt terminfo entry -TD
+ modify win_driver.c to support mouse single-clicks.
+ correct name for termlib in ncurses*-config, e.g., if it is renamed
  to provide a single file for ncurses/ncursesw libraries (patch by
  Miroslav Lichvar).

20100130
+ use vfork in test/ditto.c if available (request by Mike Frysinger).
+ miscellaneous cleanup of manpages.
+ fix typo in curs_bkgd.3x (patch by Tim van der Molen).
+ build-fix for --srcdir (patch by Miroslav Lichvar).

20100123
+ for term-driver configuration, ensure that the driver pointer is
  initialized in setupterm so that terminfo/termcap programs work.
+ amend fix for Debian #542031 to ensure that wattrset() returns only
  OK or ERR, rather than the attribute value (report by Miroslav
  Lichvar).
+ reorder WINDOWLIST to put WINDOW data after SCREEN pointer, making
  _nc_screen_of() compatible between normal/wide libraries again (patch
  by Miroslav Lichvar)
+ review/fix include-dependencies in modules files (report by Miroslav
  Lichvar).

20100116
+ modify win_driver.c to initialize acs_map for win32 console, so
  that line-drawing works.
+ modify win_driver.c to initialize TERMINAL struct so that programs
  such as test/lrtest.c and test/ncurses.c which test string
  capabilities can run.
+ modify term-driver modules to eliminate forward-reference
  declarations.

20100109
+ modify configure macro CF_XOPEN_SOURCE, etc., to use CF_ADD_CFLAGS
  consistently to add new -D's while removing duplicates.
+ modify a few configure macros to consistently put new options
  before older in the list.
+ add tiparm(), based on review of X/Open Curses Issue 7.
+ minor documentation cleanup.
+ update config.guess, config.sub from
  (caveat - its maintainer put 2010 copyright date on files dated 2009)

20100102
+ minor improvement to tic's checking of similar SGR's to allow for the
  most common case of SGR 0.
+ modify getmouse() to act as its documentation implied, returning on
  each call the preceding event until none are left.  When no more
  events remain, it will return ERR.

20091227
+ change order of lookup in progs/tput.c, looking for terminfo data
  first.
  This fixes a confusion between termcap "sg" and terminfo
  "sgr" or "sgr0", originally from 990123 changes, but exposed by
  20091114 fixes for hashing.  With this change, only "dl" and "ed" are
  ambiguous (Mandriva #56272).

20091226
+ add bterm terminfo entry, based on bogl 0.1.18 -TD
+ minor fix to rxvt+pcfkeys terminfo entry -TD
+ build-fixes for Ada95 tree for gnat 4.4 "style".

20091219
+ remove old check in mvderwin() which prevented moving a derived
  window whose origin happened to coincide with its parent's origin
  (report by Katarina Machalkova).
+ improve test/ncurses.c to put mouse droppings in the proper window.
+ update minix terminfo entry -TD
+ add bw (auto-left-margin) to nsterm* entries (Benjamin Sittler)

20091212
+ correct transfer of multicolumn characters in multirow
  field_buffer(), which stopped at the end of the first row due to
  filling of unused entries in a cchar_t array with nulls.
+ updated nsterm* entries (Benjamin Sittler, Emanuele Giaquinta)
+ modify _nc_viscbuf2() and _tracecchar_t2() to show wide-character
  nulls.
+ use strdup() in set_menu_mark(), restore .marklen struct member on
  failure.
+ eliminate clause 3 from the UCB copyrights in read_termcap.c and
  tset.c per
  (patch by Nicholas Marriott).
+ replace a malloc in tic.c with strdup, checking for failure (patch by
  Nicholas Marriott).
+ update config.guess, config.sub from

20091205
+ correct layout of working window used to extract data in
  wide-character configuration by set_field_buffer (patch by Rafael
  Garrido Fernandez)
+ improve some limit-checks related to filename length in reading and
  writing terminfo entries.
+ ensure that filename is always filled in when attempting to read
  a terminfo entry, so that infocmp can report the filename (patch
  by Nicholas Marriott).

20091128
+ modify mk-1st.awk to allow tinfo library to be built when term-driver
  is enabled.
+ add error-check to configure script to ensure that sp-funcs is
  enabled if term-driver is, since some internal interfaces rely upon
  this.

20091121
+ fix case where progs/tput is used while sp-funcs is configured; this
  requires save/restore of out-character function from _nc_prescreen
  rather than the SCREEN structure (report by Charles Wilson).
+ fix typo in man/curs_trace.3x which caused incorrect symbolic links
+ improved configure macros CF_GCC_ATTRIBUTES, CF_PROG_LINT.

20091114
+ updated man/curs_trace.3x
+ limit hashing for termcap-names to 2-characters (Ubuntu #481740).
+ change a variable name in lib_newwin.c to make it clearer which
  value is being freed on error (patch by Nicholas Marriott).

20091107
+ improve test/ncurses.c color-cycling test by reusing attribute-
  and color-cycling logic from the video-attributes screen.
+ add ifdef'd with NCURSES_INTEROP_FUNCS experimental bindings in form
  library which help make it compatible with interop applications
  (patch by Juergen Pfeifer).
+ add configure option --enable-interop, for integrating changes
  for generic/interop support to form-library by Juergen Pfeifer

20091031
+ modify use of $CC environment variable which is defined by X/Open
  as a curses feature, to ignore it if it is not a single character
  (prompted by discussion with Benjamin C W Sittler).
+ add START_TRACE in slk_init
+ fix a regression in _nc_ripoffline which made test/ncurses.c not show
  soft-keys, broken in 20090927 merging.
+ change initialization of "hidden" flag for soft-keys from true to
  false, broken in 20090704 merging (Ubuntu #464274).
+ update nsterm entries (patch by Benjamin C W Sittler, prompted by
  discussion with Fabian Groffen in GenToo #206201).
+ add test/xterm-256color.dat

20091024
+ quiet some pedantic gcc warnings.
+ modify _nc_wgetch() to check for a -1 in the fifo, e.g., after a
  SIGWINCH, and discard that value, to avoid confusing the application
  (patch by Eygene Ryabinkin, FreeBSD bin/136223).

20091017
+ modify handling of $PKG_CONFIG_LIBDIR to use only the first item in
  a possibly colon-separated list (Debian #550716).

20091010
+ supply a null-terminator to buffer in _nc_viswibuf().
+ fix a sign-extension bug in unget_wch() (report by Mike Gran).
+ minor fixes to error-returns in default function for tputs, as well
  as in lib_screen.c

20091003
+ add WACS_xxx definitions to wide-character configuration for thick-
  and double-lines (discussion with Slava Zanko).
+ remove unnecessary kcan assignment to ^C from putty (Sven Joachim)
+ add ccc and initc capabilities to xterm-16color -TD
> patch by Benjamin C W Sittler:
  + add linux-16color
  + correct initc capability of linux-c-nc end-of-range
  + similar change for dg+ccc and dgunix+ccc

20090927
+ move leak-checking for comp_captab.c into _nc_leaks_tinfo() since
  that module since 20090711 is in libtinfo.
+ add configure option --enable-term-driver, to allow compiling with
  terminal-driver.  That is used in MinGW port, and (being somewhat
  more complicated) is an experimental alternative to the conventional
  termlib internals.  Currently, it requires the sp-funcs feature to
  be enabled.
+ completed integrating "sp-funcs" by Juergen Pfeifer in ncurses
  library (some work remains for forms library).

20090919
+ document return code from define_key (report by Mike Gran).
+ make some symbolic links in the terminfo directory-tree shorter
  (patch by Daniel Jacobowitz, forwarded by Sven Joachim).
+ fix some groff warnings in terminfo.5, etc., from recent Debian
  changes.
+ change ncv and op capabilities in sun-color terminfo entry to match
  Sun's entry for this (report by Laszlo Peter).
+ improve interix smso terminfo capability by using reverse rather than
  bold (report by Kristof Zelechovski).

20090912
+ add some test programs (and make these use the same special keys
  by sharing linedata.h functions):
	test/test_addstr.c
	test/test_addwstr.c
	test/test_addchstr.c
	test/test_add_wchstr.c
+ correct internal _nc_insert_ch() to use _nc_insert_wch() when
  inserting wide characters, since the wins_wch() function that it used
  did not update the cursor position (report by Ciprian Craciun).

20090906
+ fix typo s/is_timeout/is_notimeout/ which made "man is_notimeout" not
  work.
+ add null-pointer checks to other opaque-functions.
+ add is_pad() and is_subwin() functions for opaque access to WINDOW
  (discussion with Mark Dickinson).
+ correct merge to lib_newterm.c, which broke when sp-funcs was
  enabled.

20090905
+ build-fix for building outside source-tree (report by Sven Joachim).
+ fix Debian lintian warning for man/tabs.1 by making section number
  agree with file-suffix (report by Sven Joachim).
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090829
+ workaround for bug in g++ 4.1-4.4 warnings for wattrset() macro on
  amd64 (Debian #542031).
+ fix typo in curs_mouse.3x (Debian #429198).

20090822
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090815
+ correct use of terminfo capabilities for initializing soft-keys,
  broken in 20090509 merging.
+ modify wgetch() to ensure it checks SIGWINCH when it gets an error
  in non-blocking mode (patch by Clemens Ladisch).
+ use PATH_SEPARATOR symbol when substituting into run_tic.sh, to
  help with builds on non-Unix platforms such as OS/2 EMX.
+ modify scripting for misc/run_tic.sh to test configure script's
  $cross_compiling variable directly rather than comparing host/build
  compiler names (prompted by comment in GenToo #249363).
+ fix configure script option --with-database, which was coded as an
  enable-type switch.
+ build-fixes for --srcdir (report by Frederic L W Meunier).

20090808
+ separate _nc_find_entry() and _nc_find_type_entry() from
  implementation details of hash function.

20090803
+ add tabs.1 to man/man_db.renames
+ modify lib_addch.c to compensate for removal of wide-character test
  from unctrl() in 20090704 (Debian #539735).

20090801
+ improve discussion in INSTALL for use of system's tic/infocmp for
  cross-compiling and building fallbacks.
+ modify test/demo_termcap.c to correspond better to options in
  test/demo_terminfo.c
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).
+ fix logic for 'V' in test/ncurses.c tests f/F.

20090728
+ correct logic in tigetnum(), which caused tput program to treat all
  string capabilities as numeric (report by Rajeev V Pillai,
  cf: 20090711).

20090725
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090718
+ fix a null-pointer check in _nc_format_slks() in lib_slk.c, from
  20070704 changes.
+ modify _nc_find_type_entry() to use hashing.
+ make CCHARW_MAX value configurable, noting that changing this would
  change the size of cchar_t, and would be ABI-incompatible.
+ modify test-programs, e.g.,
  test/view.c, to address subtle
  differences between Tru64/Solaris and HPUX/AIX getcchar() return
  values.
+ modify length returned by getcchar() to count the trailing null
  which is documented in X/Open (cf: 20020427).
+ fixes for test programs to build/work on HPUX and AIX, etc.

20090711
+ improve performance of tigetstr, etc., by using hashing code from tic.
+ minor fixes for memory-leak checking.
+ add test/demo_terminfo, for comparison with demo_termcap

20090704
+ remove wide-character checks from unctrl() (patch by Clemens Ladisch).
+ revise wadd_wch() and wecho_wchar() to eliminate dependency on
  unctrl().
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090627
+ update llib-lncurses[wt] to use sp-funcs.
+ various code-fixes to build/work with --disable-macros configure
  option.
+ add several new files from Juergen Pfeifer which will be used when
  integration of "sp-funcs" is complete.  This includes a port to
  MinGW.

20090613
+ move definition for NCURSES_WRAPPED_VAR back to ncurses_dll.h, to
  make includes of term.h without curses.h work (report by "Nix").
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090607
+ fix a regression in lib_tputs.c, from ongoing merges.

20090606
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090530
+ fix an infinite recursion when adding a legacy-coding 8-bit value
  using insch() (report by Clemens Ladisch).
+ free home-terminfo string in del_curterm() (patch by Dan Weber).
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090523
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090516
+ work around antique BSD game's manipulation of stdscr, etc., versus
  SCREEN's copy of the pointer (Debian #528411).
+ add a cast to wattrset macro to avoid compiler warning when comparing
  its result against ERR (adapted from patch by Matt Kraai, Debian
  #528374).

20090510
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090502
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).
+ add vwmterm terminfo entry (patch by Bryan Christ).

20090425
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090419
+ build fix for _nc_free_and_exit() change in 20090418 (report by
  Christian Ebert).

20090418
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090411
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).
  This change finishes merging for menu and panel libraries, does
  part of the form library.

20090404
+ suppress configure check for static/dynamic linker flags for gcc on
  Darwin (report by Nelson Beebe).

20090328
+ extend ansi.sys pfkey capability from kf1-kf10 to kf1-kf48, moving
  function key definitions from emx-base for consistency -TD
+ correct missing final 'p' in pfkey capability of ansi.sys-old (report
  by Kalle Olavi Niemitalo).
+ improve test/ncurses.c 'F' test, show combining characters in color.
+ quiet a false report by cppcheck in c++/cursesw.cc by eliminating
  a temporary variable.
+ use _nc_doalloc() rather than realloc() in a few places in ncurses
  library to avoid leak in out-of-memory condition (reports by William
  Egert and Martin Ettl based on cppcheck tool).
+ add --with-ncurses-wrap-prefix option to test/configure (discussion
  with Charles Wilson).
+ use ncurses*-config scripts if available for test/configure.
+ update test/aclocal.m4 and test/configure
> patches by Charles Wilson:
  + modify CF_WITH_LIBTOOL configure check to allow unreleased libtool
    version numbers (e.g. which include alphabetic chars, as well as
    digits, after the final '.').
  + improve use of -no-undefined option for libtool by setting an
    intermediate variable LT_UNDEF in the configure script, and then
    using that in the libtool link-commands.
  + fix a missing use of NCURSES_PUBLIC_VAR() in tinfo/MKcodes.awk
    from 2009031 changes.
  + improve mk-1st.awk script by writing separate cases for the
    LIBTOOL_LINK command, depending on which library (ncurses, ticlib,
    termlib) is to be linked.
  + modify configure.in to allow broken-linker configurations, not just
    enable-reentrant, to set public wrap prefix.

20090321
+ add TICS_LIST and SHLIB_LIST to allow libtool 2.2.6 on Cygwin to
  build with tic and term libraries (patch by Charles Wilson).
+ add -no-undefined option to libtool for Cygwin, MinGW, U/Win and AIX
  (report by Charles Wilson).
+ fix definition for c++/Makefile.in's SHLIB_LIST, which did not list
  the form, menu or panel libraries (patch by Charles Wilson).
+ add configure option --with-wrap-prefix to allow setting the prefix
  for functions used to wrap global variables to something other than
  "_nc_" (discussion with Charles Wilson).

20090314
+ modify scripts to generate ncurses*-config and pc-files to add
  dependency for tinfo library (patch by Charles Wilson).
+ improve comparison of program-names when checking for linked flavors
  such as "reset" by ignoring the executable suffix (reports by Charles
  Wilson, Samuel Thibault and Cedric Bretaudeau on Cygwin mailing
  list).
+ suppress configure check for static/dynamic linker flags for gcc on
  Solaris 10, since gcc is confused by absence of static libc, and
  does not switch back to dynamic mode before finishing the libraries
  (reports by Joel Bertrand, Alan Pae).
+ minor fixes to Intel compiler warning checks in configure script.
+ modify _nc_leaks_tinfo() so leak-checking in test/railroad.c works.
+ modify set_curterm() to make broken-linker configuration work with
  changes from 20090228 (report by Charles Wilson).

20090228
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).
+ modify declaration of cur_term when broken-linker is used, but
  enable-reentrant is not, to match pre-5.7 (report by Charles Wilson).

20090221
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090214
+ add configure script --enable-sp-funcs to enable the new set of
  extended functions.
+ start integrating patches by Juergen Pfeifer:
  + add extended functions which specify the SCREEN pointer for several
    curses functions which use the global SP (these are incomplete;
    some internals work is needed to complete these).
  + add special cases to configure script for MinGW port.

20090207
+ update several configure macros from lynx changes
  + append (not prepend) to CFLAGS/CPPFLAGS
  + change variable from PATHSEP to PATH_SEPARATOR
+ improve install-rules for pc-files (patch by Miroslav Lichvar).
  + make it work with $DESTDIR
  + create the pkg-config library directory if needed.

20090124
+ modify init_pair() to allow caller to create extra color pairs beyond
  the color_pairs limit, which use default colors (request by Emanuele
  Giaquinta).
+ add misc/terminfo.tmp and misc/*.pc to "sources" rule.
+ fix typo "==" where "=" is needed in ncurses-config.in and
  gen-pkgconfig.in files (Debian #512161).

20090117
+ add -shared option to MK_SHARED_LIB when -Bsharable is used, for
  *BSD's, without which "main" might be one of the shared library's
  dependencies (report/analysis by Ken Dickey).
+ modify waddch_literal(), updating line-pointer after a multicolumn
  character is found to not fit on the current row, and wrapping is
  done.  Since the line-pointer was not updated, the wrapped
  multicolumn character was written to the beginning of the current row
  (cf: 20041023, reported by "Nick" regarding problem with ncmpc).

20090110
+ add screen.Eterm terminfo entry (GenToo #124887) -TD
+ modify adacurses-config to look for ".ali" files in the adalib
  directory.
+ correct install for Ada95, which omitted libAdaCurses.a used in
  adacurses-config
+ change install for adacurses-config to provide additional flavors
  such as adacursesw-config, for ncursesw (GenToo #167849).

20090105
+ remove undeveloped feature in ncurses-config.in for setting
  prefix variable.
+ recent change to ncurses-config.in did not take into account the
  --disable-overwrite option, which sets $includedir to the
  subdirectory and using just that for a -I option does not work - fix
  (report by Frederic L W Meunier).

20090104
+ modify gen-pkgconfig.in to eliminate a dependency on rpath when
  deciding whether to add $LIBS to --libs output; that should be shown
  for the ncurses and tinfo libraries without taking rpath into
  account.
+ fix an overlooked change from $AR_OPTS to $ARFLAGS in mk-1st.awk,
  used in static libraries (report by Marty Jack).

20090103
+ add a configure-time check to pick a suitable value for
  CC_SHARED_OPTS for Solaris (report by Dagobert Michelsen).
+ add configure --with-pkg-config and --enable-pc-files options, along
  with misc/gen-pkgconfig.in which can be used to generate ".pc" files
  for pkg-config (request by Jan Engelhardt).
+ use $includedir symbol in misc/ncurses-config.in, add --includedir
  option.
+ change makefiles to use $ARFLAGS rather than $AR_OPTS, provide a
  configure check to detect whether a "-" is needed before "ar"
  options.
+ update config.guess, config.sub from

20081227
+ modify mk-1st.awk to work with extra categories for tinfo library.
+ modify configure script to allow building shared libraries with gcc
  on AIX 5 or 6 (adapted from patch by Lital Natan).

20081220
+ modify to omit the opaque-functions from lib_gen.o when
  --disable-ext-funcs is used.
+ add test/clip_printw.c to illustrate how to use printw without
  wrapping.
+ modify ncurses 'F' test to demo wborder_set() with colored lines.
+ modify ncurses 'f' test to demo wborder() with colored lines.

20081213
+ add check for failure to open hashed-database needed for db4.6
  (GenToo #245370).
+ corrected --without-manpages option; previous change only suppressed
  the auxiliary rules install.man and uninstall.man
+ add case for FreeMINT to configure macro CF_XOPEN_SOURCE (patch from
  GenToo #250454).
+ fixes from NetBSD port at
	patch-ac (build-fix for DragonFly)
	patch-ae (use INSTALL_SCRIPT for installing misc/ncurses*-config).
+ improve configure script macros CF_HEADER_PATH and CF_LIBRARY_PATH
  by adding CFLAGS, CPPFLAGS and LDFLAGS, LIBS values to the
  search-lists.
+ correct title string for keybound manpage (patch by Frederic Culot,
  OpenBSD documentation/6019).

20081206
+ move del_curterm() call from _nc_freeall() to _nc_leaks_tinfo() to
  work for progs/clear, progs/tabs, etc.
+ correct buffer-size after internal resizing of wide-character
  set_field_buffer(), broken in 20081018 changes (report by Mike Gran).
+ add "-i" option to test/filter.c to tell it to use initscr() rather
  than newterm(), to investigate a report on comp.unix.programmer that
  ncurses would clear the screen in that case (it does not - the issue
  was xterm's alternate screen feature).
+ add check in mouse-driver to disable connection if GPM returns a
  zero, indicating that the connection is closed (Debian #506717,
  adapted from patch by Samuel Thibault).

20081129
+ improve a workaround in adding wide-characters, when a control
  character is found.  The library (cf: 20040207) uses unctrl() to
  obtain a printable version of the control character, but was not
  passing color or video attributes.
+ improve test/ncurses.c 'a' test, using unctrl() more consistently to
  display meta-characters.
+ turn on _XOPEN_CURSES definition in curses.h
+ add eterm-color entry (report by Vincent Lefevre) -TD
+ correct use of key_name() in test/ncurses.c 'A' test, which only
  displays wide-characters, not key-codes since 20070612 (report by
  Ricardo Cantu).

20081122
+ change _nc_has_mouse() to has_mouse(), reflect its use in C++ and
  Ada95 (patch by Juergen Pfeifer).
+ document in TO-DO an issue with Cygwin's package for GNAT (report
  by Mike Dennison).
+ improve error-checking of command-line options in "tabs" program.

20081115
+ change several terminfo entries to make consistent use of ANSI
  clear-all-tabs -TD
+ add "tabs" program (prompted by Debian #502260).
+ add configure --without-manpages option (request by Mike Frysinger).

20081102 5.7 release for upload to

20081025
+ add a manpage to discuss memory leaks.
+ add support for shared libraries for QNX (other than libtool, which
  does not work well on that platform).
+ build-fix for QNX C++ binding.

20081018
+ build-fixes for OS/2 EMX.
+ modify form library to accept control characters such as newline
  in set_field_buffer(), which is compatible with Solaris (report by
  Nit Khair).
+ modify configure script to assume --without-hashed-db when
  --disable-database is used.
+ add "-e" option in ncurses/Makefile.in when generating source-files
  to force earlier exit if the build environment fails unexpectedly
  (prompted by patch by Adrian Bunk).
+ change configure script to use CF_UTF8_LIB, improved variant of
  CF_LIBUTF8.

20081012
+ add teraterm4.59 terminfo entry, use that as primary teraterm entry,
  rename original to teraterm2.3 -TD
+ update "gnome" terminfo to 2.22.3 -TD
+ update "konsole" terminfo to 1.6.6, needs today's fix for tic -TD
+ add "aterm" terminfo -TD
+ add "linux2.6.26" terminfo -TD
+ add logic to tic for cancelling strings in user-defined capabilities,
  overlooked til now.

20081011
+ regenerated html documentation.
+ add -m and -s options to test/keynames.c and test/key_names.c to test
  the meta() function with keyname() or key_name(), respectively.
+ correct return value of key_name() on error; it is null.
+ document some unresolved issues for rpath and pthreads in TO-DO.
+ fix a missing prototype for ioctl() on OpenBSD in tset.c
+ add configure option --disable-tic-depends to make explicit whether
  tic library depends on ncurses/ncursesw library, amends change from
  20080823 (prompted by Debian #501421).

20081004
+ some build-fixes for configure --disable-ext-funcs (incomplete, but
  works for C/C++ parts).
+ improve configure-check for awks unable to handle large strings, e.g.
1751 AIX 5.1 whose awk silently gives up on large printf's. 1752 1753 20080927 1754 + fix build for --with-dmalloc by workaround for redefinition of 1755 strndup between string.h and dmalloc.h 1756 + fix build for --disable-sigwinch 1757 + add environment variable NCURSES_GPM_TERMS to allow override to use 1758 GPM on terminals other than "linux", etc. 1759 + disable GPM mouse support when $TERM does not happen to contain 1760 "linux", since Gpm_Open() no longer limits its assertion to terminals 1761 that it might handle, e.g., within "screen" in xterm. 1762 + reset mouse file-descriptor when unloading GPM library (report by 1763 Miroslav Lichvar). 1764 + fix build for --disable-leaks --enable-widec --with-termlib 1765 > patch by Juergen Pfeifer: 1766 + use improved initialization for soft-label keys in Ada95 sample code. 1767 + discard internal symbol _nc_slk_format (unused since 20080112). 1768 + move call of slk_paint_info() from _nc_slk_initialize() to 1769 slk_intern_refresh(), improving initialization. 1770 1771 20080925 1772 + fix bug in mouse code for GPM from 20080920 changes (reported in 1773 Debian #500103, also Miroslav Lichvar). 1774 1775 20080920 1776 + fix shared-library rules for cygwin with tic- and tinfo-libraries. 1777 + fix a memory leak when failure to connect to GPM. 1778 + correct check for notimeout() in wgetch() (report on linux.redhat 1779 newsgroup by FurtiveBertie). 1780 + add an example warning-suppression file for valgrind, 1781 misc/ncurses.supp (based on example from Reuben Thomas) 1782 1783 20080913 1784 + change shared-library configuration for OpenBSD, make rpath work. 1785 + build-fixes for using libutf8, e.g., on OpenBSD 3.7 1786 1787 20080907 1788 + corrected fix for --enable-weak-symbols (report by Frederic L W 1789 Meunier). 1790 1791 20080906 1792 + corrected gcc options for building shared libraries on IRIX64. 
1793 + add configure check for awk programs unable to handle big-strings, 1794 use that to improve the default for --enable-big-strings option. 1795 + makefile-fixes for --enable-weak-symbols (report by Frederic L W 1796 Meunier). 1797 + update test/configure script. 1798 + adapt ifdef's from library to make test/view.c build when mbrtowc() 1799 is unavailable, e.g., with HPUX 10.20. 1800 + add configure check for wcsrtombs, mbsrtowcs, which are used in 1801 test/ncurses.c, and use wcstombs, mbstowcs instead if available, 1802 fixing build of ncursew for HPUX 11.00 1803 1804 20080830 1805 + fixes to make Ada95 demo_panels() example work. 1806 + modify Ada95 'rain' test program to accept keyboard commands like the 1807 C-version. 1808 + modify BeOS-specific ifdef's to build on Haiku (patch by Scott 1809 Mccreary). 1810 + add configure-check to see if the std namespace is legal for cerr 1811 and endl, to fix a build issue with Tru64. 1812 + consistently use NCURSES_BOOL in lib_gen.c 1813 + filter #line's from lib_gen.c 1814 + change delimiter in MKlib_gen.sh from '%' to '@', to avoid 1815 substitution by IBM xlc to '#' as part of its extensions to digraphs. 1816 + update config.guess, config.sub from 1817 1818 (caveat - its maintainer removed support for older Linux systems). 1819 1820 20080823 1821 + modify configure check for pthread library to work with OSF/1 5.1, 1822 which uses #define's to associate its header and library. 1823 + use pthread_mutexattr_init() for initializing pthread_mutexattr_t, 1824 makes threaded code work on HPUX 11.23 1825 + fix a bug in demo_menus in freeing menus (cf: 20080804). 1826 + modify configure script for the case where tic library is used (and 1827 possibly renamed) to remove its dependency upon ncurses/ncursew 1828 library (patch by Dr Werner Fink). 1829 + correct manpage for menu_fore() which gave wrong default for 1830 the attribute used to display a selected entry (report by Mike Gran). 
1831 + add Eterm-256color, Eterm-88color and rxvt-88color (prompted by 1832 Debian #495815) -TD 1833 1834 20080816 1835 + add configure option --enable-weak-symbols to turn on new feature. 1836 + add configure-check for availability of weak symbols. 1837 + modify linkage with pthread library to use weak symbols so that 1838 applications not linked to that library will not use the mutexes, 1839 etc. This relies on gcc, and may be platform-specific (patch by Dr 1840 Werner Fink). 1841 + add note to INSTALL to document limitation of renaming of tic library 1842 using the --with-ticlib configure option (report by Dr Werner Fink). 1843 + document (in manpage) why tputs does not detect I/O errors (prompted 1844 by comments by Samuel Thibault). 1845 + fix remaining warnings from Klocwork report. 1846 1847 20080804 1848 + modify _nc_panelhook() data to account for a permanent memory leak. 1849 + fix memory leaks in test/demo_menus 1850 + fix most warnings from Klocwork tool (report by Larry Zhou). 1851 + modify configure script CF_XOPEN_SOURCE macro to add case for 1852 "dragonfly" from xterm #236 changes. 1853 + modify configure script --with-hashed-db to let $LIBS override the 1854 search for the db library (prompted by report by Samson Pierre). 1855 1856 20080726 1857 + build-fixes for gcc 4.3.1 (changes to gnat "warnings", and C inlining 1858 thresholds). 1859 1860 20080713 1861 + build-fix (reports by Christian Ebert, Funda Wang). 1862 1863 20080712 1864 + compiler-warning fixes for Solaris. 1865 1866 20080705 1867 + use NCURSES_MOUSE_MASK() in definition of BUTTON_RELEASE(), etc., to 1868 make those work properly with the "--enable-ext-mouse" configuration 1869 (cf: 20050205). 1870 + improve documentation of build-cc options in INSTALL. 1871 + work-around a bug in gcc 4.2.4 on AIX, which does not pass the 1872 -static/-dynamic flags properly to linker, causing test/bs to 1873 not link. 
1874 1875 20080628 1876 + correct some ifdef's needed for the broken-linker configuration. 1877 + make debugging library's $BAUDRATE feature work for termcap 1878 interface. 1879 + make $NCURSES_NO_PADDING feature work for termcap interface (prompted 1880 by comment on FreeBSD mailing list). 1881 + add screen.mlterm terminfo entry -TD 1882 + improve mlterm and mlterm+pcfkeys terminfo entries -TD 1883 1884 20080621 1885 + regenerated html documentation. 1886 + expand manpage description of parameters for form_driver() and 1887 menu_driver() (prompted by discussion with Adam Spragg). 1888 + add null-pointer checks for cur_term in baudrate() and 1889 def_shell_mode(), def_prog_mode() 1890 + fix some memory leaks in delscreen() and wide acs. 1891 1892 20080614 1893 + modify test/ditto.c to illustrate multi-threaded use_screen(). 1894 + change CC_SHARED_OPTS from -KPIC to -xcode=pic32 for Solaris. 1895 + add "-shared" option to MK_SHARED_LIB for gcc on Solaris (report 1896 by Poor Yorick). 1897 1898 20080607 1899 + finish changes to wgetch(), making it switch as needed to the 1900 window's actual screen when calling wrefresh() and wgetnstr(). That 1901 allows wgetch() to get used concurrently in different threads with 1902 some minor restrictions, e.g., the application should not delete a 1903 window which is being used in a wgetch(). 1904 + simplify mutex's, combining the window- and screen-mutex's. 1905 1906 20080531 1907 + modify wgetch() to use the screen which corresponds to its window 1908 parameter rather than relying on SP; some dependent functions still 1909 use SP internally. 1910 + factor out most use of SP in lib_mouse.c, using parameter. 1911 + add internal _nc_keyname(), replacing keyname() to associate with a 1912 particular SCREEN rather than the global SP. 1913 + add internal _nc_unctrl(), replacing unctrl() to associate with a 1914 particular SCREEN rather than the global SP. 
1915 + add internal _nc_tracemouse(), replacing _tracemouse() to eliminate 1916 its associated global buffer _nc_globals.tracemse_buf now in SCREEN. 1917 + add internal _nc_tracechar(), replacing _tracechar() to use SCREEN in 1918 preference to the global _nc_globals.tracechr_buf buffer. 1919 1920 20080524 1921 + modify _nc_keypad() to make it switch temporarily as needed to the 1922 screen which must be updated. 1923 + wrap cur_term variable to help make _nc_keymap() thread-safe, and 1924 always set the screen's copy of this variable in set_curterm(). 1925 + restore curs_set() state after endwin()/refresh() (report/patch 1926 Miroslav Lichvar) 1927 1928 20080517 1929 + modify configure script to note that --enable-ext-colors and 1930 --enable-ext-mouse are not experimental, but extensions from 1931 the ncurses ABI 5. 1932 + corrected manpage description of setcchar() (discussion with 1933 Emanuele Giaquinta). 1934 + fix for adding a non-spacing character at the beginning of a line 1935 (report/patch by Miroslav Lichvar). 1936 1937 20080503 1938 + modify screen.* terminfo entries using new screen+fkeys to fix 1939 overridden keys in screen.rxvt (Debian #478094) -TD 1940 + modify internal interfaces to reduce wgetch()'s dependency on the 1941 global SP. 1942 + simplify some loops with macros each_screen(), each_window() and 1943 each_ripoff(). 1944 1945 20080426 1946 + continue modifying test/ditto.c toward making it demonstrate 1947 multithreaded use_screen(), using fifos to pass data between screens. 1948 + fix typo in form.3x (report by Mike Gran). 1949 1950 20080419 1951 + add screen.rxvt terminfo entry -TD 1952 + modify tic -f option to format spaces as \s to prevent them from 1953 being lost when that is read back in unformatted strings. 1954 + improve test/ditto.c, using a "talk"-style layout. 1955 1956 20080412 1957 + change test/ditto.c to use openpty() and xterm. 
1958 + add locks for copywin(), dupwin(), overlap(), overlay() on their 1959 window parameters. 1960 + add locks for initscr() and newterm() on updates to the SCREEN 1961 pointer. 1962 + finish table in curs_thread.3x manpage. 1963 1964 20080405 1965 + begin table in curs_thread.3x manpage describing the scope of data 1966 used by each function (or symbol) for threading analysis. 1967 + add null-pointer checks to setsyx() and getsyx() (prompted by 1968 discussion by Martin v. Lowis and Jeroen Ruigrok van der Werven on 1969 python-dev2 mailing list). 1970 1971 20080329 1972 + add null-pointer checks in set_term() and delscreen(). 1973 + move _nc_windows into _nc_globals, since windows can be pads, which 1974 are not associated with a particular screen. 1975 + change use_screen() to pass the SCREEN* parameter rather than 1976 stdscr to the callback function. 1977 + force libtool to use tag for 'CC' in case it does not detect this, 1978 e.g., on aix when using CC=powerpc-ibm-aix5.3.0.0-gcc 1979 (report/patch by Michael Haubenwallner). 1980 + override OBJEXT to "lo" when building with libtool, to work on 1981 platforms such as AIX where libtool may use a different suffix for 1982 the object files than ".o" (report/patch by Michael Haubenwallner). 1983 + add configure --with-pthread option, for building with the POSIX 1984 thread library. 1985 1986 20080322 1987 + fill in extended-color pair two more places in wbkgrndset() and 1988 waddch_nosync() (prompted by Sedeno's patch). 1989 + fill in extended-color pair in _nc_build_wch() to make colors work 1990 for wide-characters using extended-colors (patch by Alejandro R 1991 Sedeno). 1992 + add x/X toggles to ncurses.c C color test to test/demo 1993 wide-characters with extended-colors. 1994 + add a/A toggles to ncurses.c c/C color tests. 1995 + modify test/ditto.c to use use_screen(). 1996 + finish modifying test/rain.c to demonstrate threads. 1997 1998 20080308 1999 + start modifying test/rain.c for threading demo. 
2000 + modify test/ncurses.c to make 'f' test accept the f/F/b/F/</> toggles 2001 that the 'F' accepts. 2002 + modify test/worm.c to show trail in reverse-video when other threads 2003 are working concurrently. 2004 + fix a deadlock from improper nesting of mutexes for windowlist and 2005 window. 2006 2007 20080301 2008 + fixes from 20080223 resolved issue with mutexes; change to use 2009 recursive mutexes to fix memory leak in delwin() as called from 2010 _nc_free_and_exit(). 2011 2012 20080223 2013 + fix a size-difference in _nc_globals which caused hanging of mutex 2014 lock/unlock when termlib was built separately. 2015 2016 20080216 2017 + avoid using nanosleep() in threaded configuration since that often 2018 is implemented to suspend the entire process. 2019 2020 20080209 2021 + update test programs to build/work with various UNIX curses for 2022 comparisons. This was to reinvestigate statement in X/Open curses 2023 that insnstr and winsnstr perform wrapping. None of the Unix-branded 2024 implementations do this, as noted in manpage (cf: 20040228). 2025 2026 20080203 2027 + modify _nc_setupscreen() to set the legacy-coding value the same 2028 for both narrow/wide models. It had been set only for wide model, 2029 but is needed to make unctrl() work with locale in the narrow model. 2030 + improve waddch() and winsch() handling of EILSEQ from mbrtowc() by 2031 using unctrl() to display illegal bytes rather than trying to append 2032 further bytes to make up a valid sequence (reported by Andrey A 2033 Chernov). 2034 + modify unctrl() to check codes in 128-255 range versus isprint(). 2035 If they are not printable, and locale was set, use a "M-" or "~" 2036 sequence. 2037 2038 20080126 2039 + improve threading in test/worm.c (wrap refresh calls, and KEY_RESIZE 2040 handling). Now it hangs in napms(), no matter whether nanosleep() 2041 or poll() or select() are used on Linux. 
2042 2043 20080119 2044 + fixes to build with --disable-ext-funcs 2045 + add manpage for use_window and use_screen. 2046 + add set_tabsize() and set_escdelay() functions. 2047 2048 20080112 2049 + remove recursive-mutex definitions, finish threading demo for worm.c 2050 + remove a redundant adjustment of lines in resizeterm.c's 2051 adjust_window() which caused occasional misadjustment of stdscr when 2052 softkeys were used. 2053 2054 20080105 2055 + several improvements to terminfo entries based on xterm #230 -TD 2056 + modify MKlib_gen.sh to handle keyname/key_name prototypes, so the 2057 "link_test" builds properly. 2058 + fix for toe command-line options -u/-U to ensure filename is given. 2059 + fix allocation-size for command-line parsing in infocmp from 20070728 2060 (report by Miroslav Lichvar) 2061 + improve resizeterm() by moving ripped-off lines, and repainting the 2062 soft-keys (report by Katarina Machalkova) 2063 + add clarification in wclear's manpage noting that the screen will be 2064 cleared even if a subwindow is cleared (prompted by Christer Enfors 2065 question). 2066 + change test/ncurses.c soft-key tests to work with KEY_RESIZE. 2067 2068 20071222 2069 + continue implementing support for threading demo by adding mutex 2070 for delwin(). 2071 2072 20071215 2073 + add several functions to C++ binding which wrap C functions that 2074 pass a WINDOW* parameter (request by Chris Lee). 2075 2076 20071201 2077 + add note about configure options needed for Berkeley database to the 2078 INSTALL file. 2079 + improve checks for version of Berkeley database libraries. 2080 + amend fix for rpath to not modify LDFLAGS if the platform has no 2081 applicable transformation (report by Christian Ebert, cf: 20071124). 2082 2083 20071124 2084 + modify configure option --with-hashed-db to accept a parameter which 2085 is the install-prefix of a given Berkeley Database (prompted by 2086 pierre4d2 comments). 
2087 + rewrite wrapper for wcrtomb(), making it work on Solaris. This is 2088 used in the form library to determine the length of the buffer needed 2089 by field_buffer (report by Alfred Fung). 2090 + remove unneeded window-parameter from C++ binding for wresize (report 2091 by Chris Lee). 2092 2093 20071117 2094 + modify the support for filesystems which do not support mixed-case to 2095 generate 2-character (hexadecimal) codes for the lower-level of the 2096 filesystem terminfo database (request by Michail Vidiassov). 2097 + add configure option --enable-mixed-case, to allow overriding the 2098 configure script's check if the filesystem supports mixed-case 2099 filenames. 2100 + add wresize() to C++ binding (request by Chris Lee). 2101 + define NCURSES_EXT_FUNCS and NCURSES_EXT_COLORS in curses.h to make 2102 it simpler to tell if the extended functions and/or colors are 2103 declared. 2104 2105 20071103 2106 + update memory-leak checks for changes to names.c and codes.c 2107 + correct acsc strings in h19, z100 (patch by Benjamin C W Sittler). 2108 2109 20071020 2110 + continue implementing support for threading demo by adding mutex 2111 for use_window(). 2112 + add mrxvt terminfo entry, add/fix xterm building blocks for modified 2113 cursor keys -TD 2114 + compile with FreeBSD "contemporary" TTY interface (patch by 2115 Rong-En Fan). 2116 2117 20071013 2118 + modify makefile rules to allow clear, tput and tset to be built 2119 without libtic. The other programs (infocmp, tic and toe) rely on 2120 that library. 2121 + add/modify null-pointer checks in several functions for SP and/or 2122 the WINDOW* parameter (report by Thorben Krueger). 2123 + fixes for field_buffer() in formw library (see Redhat Bugzilla 2124 #310071, patches by Miroslav Lichvar). 2125 + improve performance of NCURSES_CHAR_EQ code (patch by Miroslav 2126 Lichvar). 
2127 + update/improve mlterm and rxvt terminfo entries, e.g., for 2128 the modified cursor- and keypad-keys -TD 2129 2130 20071006 2131 + add code to curses.priv.h ifdef'd with NCURSES_CHAR_EQ, which 2132 changes the CharEq() macro to an inline function to allow comparing 2133 cchar_t struct's without comparing gaps in a possibly unpacked 2134 memory layout (report by Miroslav Lichvar). 2135 2136 20070929 2137 + add new functions to lib_trace.c to setup mutex's for the _tracef() 2138 calls within the ncurses library. 2139 + for the reentrant model, move _nc_tputs_trace and _nc_outchars into 2140 the SCREEN. 2141 + start modifying test/worm.c to provide threading demo (incomplete). 2142 + separated ifdef's for some BSD-related symbols in tset.c, to make 2143 it compile on LynxOS (report by Greg Gemmer). 2144 20070915 2145 + modify Ada95/gen/Makefile to use shlib script, to simplify building 2146 shared-library configuration on platforms lacking rpath support. 2147 + build-fix for Ada95/src/Makefile to reflect changed dependency for 2148 the terminal-interface-curses-aux.adb file which is now generated. 2149 + restructuring test/worm.c, for use_window() example. 2150 2151 20070908 2152 + add use_window() and use_screen() functions, to develop into support 2153 for threaded library (incomplete). 2154 + fix typos in man/curs_opaque.3x which kept the install script from 2155 creating symbolic links to two aliases created in 20070818 (report by 2156 Rong-En Fan). 2157 2158 20070901 2159 + remove a spurious newline from output of html.m4, which caused links 2160 for Ada95 html to be incorrect for the files generated using m4. 2161 + start investigating mutex's for SCREEN manipulation (incomplete). 2162 + minor cleanup of codes.c/names.c for --enable-const 2163 + expand/revise "Routine and Argument Names" section of ncurses manpage 2164 to address report by David Givens in newsgroup discussion. 
2165 + fix interaction between --without-progs/--with-termcap configure 2166 options (report by Michail Vidiassov). 2167 + fix typo in "--disable-relink" option (report by Michail Vidiassov). 2168 2169 20070825 2170 + fix a sign-extension bug in infocmp's repair_acsc() function 2171 (cf: 971004). 2172 + fix old configure script bug which prevented "--disable-warnings" 2173 option from working (patch by Mike Frysinger). 2174 2175 20070818 2176 + add 9term terminal description (request by Juhapekka Tolvanen) -TD 2177 + modify comp_hash.c's string output to avoid misinterpreting a null 2178 "\0" followed by a digit. 2179 + modify MKnames.awk and MKcodes.awk to support big-strings. 2180 This only applies to the cases (broken linker, reentrant) where 2181 the corresponding arrays are accessed via wrapper functions. 2182 + split MKnames.awk into two scripts, eliminating the shell redirection 2183 which complicated the make process and also the bogus timestamp file 2184 which was introduced to fix "make -j". 2185 + add test/test_opaque.c, test/test_arrays.c 2186 + add wgetscrreg() and wgetparent() for applications that may need it 2187 when NCURSES_OPAQUE is defined (prompted by Bryan Christ). 2188 2189 20070812 2190 + amend treatment of infocmp "-r" option to retain the 1023-byte limit 2191 unless "-T" is given (cf: 981017). 2192 + modify comp_captab.c generation to use big-strings. 2193 + make _nc_capalias_table and _nc_infoalias_table private accessed via 2194 _nc_get_alias_table() since the tables are used only within the tic 2195 library. 2196 + modify configure script to skip Intel compiler in CF_C_INLINE. 2197 + make _nc_info_hash_table and _nc_cap_hash_table private accessed via 2198 _nc_get_hash_table() since the tables are used only within the tic 2199 library. 
2200 2201 20070728 2202 + make _nc_capalias_table and _nc_infoalias_table private, accessed via 2203 _nc_get_alias_table() since they are used only by parse_entry.c 2204 + make _nc_key_names private since it is used only by lib_keyname.c 2205 + add --disable-big-strings configure option to control whether 2206 unctrl.c is generated using the big-string optimization - which may 2207 use strings longer than supported by a given compiler. 2208 + reduce relocation tables for tic, infocmp by changing type of 2209 internal hash tables to short, and make those private symbols. 2210 + eliminate large fixed arrays from progs/infocmp.c 2211 2212 20070721 2213 + change winnstr() to stop at the end of the line (cf: 970315). 2214 + add test/test_get_wstr.c 2215 + add test/test_getstr.c 2216 + add test/test_inwstr.c 2217 + add test/test_instr.c 2218 2219 20070716 2220 + restore a call to obtain screen-size in _nc_setupterm(), which 2221 is used in tput and other non-screen applications via setupterm() 2222 (Debian #433357, reported by Florent Bayle, Christian Ohm, 2223 cf: 20070310). 2224 2225 20070714 2226 + add test/savescreen.c test-program 2227 + add check to trace-file open, if the given name is a directory, add 2228 ".log" to the name and try again. 2229 + add konsole-256color entry -TD 2230 + add extra gcc warning options from xterm. 2231 + minor fixes for ncurses/hashmap test-program. 2232 + modify configure script to quiet c++ build with libtool when the 2233 --disable-echo option is used. 2234 + modify configure script to disable ada95 if libtool is selected, 2235 writing a warning message (addresses FreeBSD ports/114493). 2236 + update config.guess, config.sub 2237 2238 20070707 2239 + add continuous-move "M" to demo_panels to help test refresh changes. 2240 + improve fix for refresh of window on top of multi-column characters, 2241 taking into account some split characters on left/right window 2242 boundaries. 
2243 2244 20070630 2245 + add "widec" row to _tracedump() output to help diagnose remaining 2246 problems with multi-column characters. 2247 + partial fix for refresh of window on top of multi-column characters 2248 which are partly overwritten (report by Sadrul H Chowdhury). 2249 + ignore A_CHARTEXT bits in vidattr() and vid_attr(), in case 2250 multi-column extension bits are passed there. 2251 + add setlocale() call to demo_panels.c, needed for wide-characters. 2252 + add some output flags to _nc_trace_ttymode to help diagnose a bug 2253 report by Larry Virden, i.e., ONLCR, OCRNL, ONOCR and ONLRET, 2254 2255 20070623 2256 + add test/demo_panels.c 2257 + implement opaque version of setsyx() and getsyx(). 2258 2259 20070612 2260 + corrected xterm+pcf2 terminfo modifiers for F1-F4, to match xterm 2261 #226 -TD 2262 + split-out key_name() from MKkeyname.awk since it now depends upon 2263 wunctrl() which is not in libtinfo (report by Rong-En Fan). 2264 2265 20070609 2266 + add test/key_name.c 2267 + add stdscr cases to test/inchs.c and test/inch_wide.c 2268 + update test/configure 2269 + correct formatting of DEL (0x7f) in _nc_vischar(). 2270 + null-terminate result of wunctrl(). 2271 + add null-pointer check in key_name() (report by Andreas Krennmair, 2272 cf: 20020901). 2273 2274 20070602 2275 + adapt mouse-handling code from menu library in form-library 2276 (discussion with Clive Nicolson). 2277 + add a modification of test/dots.c, i.e., test/dots_mvcur.c to 2278 illustrate how to use mvcur(). 2279 + modify wide-character flavor of SetAttr() to preserve the 2280 WidecExt() value stored in the .attr field, e.g., in case it 2281 is overwritten by chgat (report by Aleksi Torhamo). 2282 + correct buffer-size for _nc_viswbuf2n() (report by Aleksi Torhamo). 2283 + build-fixes for Solaris 2.6 and 2.7 (patch by Peter O'Gorman). 
2284 2285 20070526 2286 + modify keyname() to use "^X" form only if meta() has been called, or 2287 if keyname() is called without initializing curses, e.g., via 2288 initscr() or newterm() (prompted by LinuxBase #1604). 2289 + document some portability issues in man/curs_util.3x 2290 + add a shadow copy of TTY buffer to _nc_prescreen to fix applications 2291 broken by moving that data into SCREEN (cf: 20061230). 2292 2293 20070512 2294 + add 'O' (wide-character panel test) in ncurses.c to demonstrate a 2295 problem reported by Sadrul H Chowdhury with repainting parts of 2296 a fullwidth cell. 2297 + modify slk_init() so that if there are preceding calls to 2298 ripoffline(), those affect the available lines for soft-keys (adapted 2299 from patch by Clive Nicolson). 2300 + document some portability issues in man/curs_getyx.3x 2301 2302 20070505 2303 + fix a bug in Ada95/samples/ncurses which caused a variable to 2304 become uninitialized in the "b" test. 2305 + fix Ada95/gen/Makefile.in adahtml rule to account for recent 2306 movement of files, fix a few incorrect manpage references in the 2307 generated html. 2308 + add Ada95 binding to _nc_freeall() as Curses_Free_All to help with 2309 memory-checking. 2310 + correct some functions in Ada95 binding which were using return value 2311 from C where none was returned: idcok(), immedok() and wtimeout(). 2312 + amend recent changes for Ada95 binding to make it build with 2313 Cygwin's linker, e.g., with configure options 2314 --enable-broken-linker --with-ticlib 2315 2316 20070428 2317 + add a configure check for gcc's options for inlining, use that to 2318 quiet a warning message where gcc's default behavior changed from 2319 3.x to 4.x. 2320 + improve warning message when checking if GPM is linked to curses 2321 library by not warning if its use of "wgetch" is via a weak symbol. 2322 + add loader options when building with static libraries to ensure that 2323 an installed shared library for ncurses does not conflict. 
This is 2324 reported as problem with Tru64, but could affect other platforms 2325 (report Martin Mokrejs, analysis by Tim Mooney). 2326 + fix build on cygwin after recent ticlib/termlib changes, i.e., 2327 + adjust TINFO_SUFFIX value to work with cygwin's dll naming 2328 + revert a change from 20070303 which commented out dependency of 2329 SHLIB_LIST in form/menu/panel/c++ libraries. 2330 + fix initialization of ripoff stack pointer (cf: 20070421). 2331 2332 20070421 2333 + move most static variables into structures _nc_globals and 2334 _nc_prescreen, to simplify storage. 2335 + add/use configure script macro CF_SIG_ATOMIC_T, use the corresponding 2336 type for data manipulated by signal handlers (prompted by comments 2337 in mailing.openbsd.bugs newsgroup). 2338 + modify CF_WITH_LIBTOOL to allow one to pass options such as -static 2339 to the libtool create- and link-operations. 2340 2341 20070414 2342 + fix whitespace in curs_opaque.3x which caused a spurious ';' in 2343 the installed aliases (report by Peter Santoro). 2344 + fix configure script to not try to generate adacurses-config when 2345 Ada95 tree is not built. 2346 2347 20070407 2348 + add man/curs_legacy.3x, man/curs_opaque.3x 2349 + fix acs_map binding for Ada95 when --enable-reentrant is used. 2350 + add adacurses-config to the Ada95 install, based on version from 2351 FreeBSD port, in turn by Juergen Pfeifer in 2000 (prompted by 2352 comment on comp.lang.ada newsgroup). 2353 + fix includes in c++ binding to build with Intel compiler 2354 (cf: 20061209). 2355 + update install rule in Ada95 to use mkdirs.sh 2356 > other fixes prompted by inspection for Coverity report: 2357 + modify ifdef's for c++ binding to use try/catch/throw statements 2358 + add a null-pointer check in tack/ansi.c request_cfss() 2359 + fix a memory leak in ncurses/base/wresize.c 2360 + corrected check for valid memu/meml capabilities in 2361 progs/dump_entry.c when handling V_HPUX case. 
> fixes based on Coverity report:
  + remove dead code in test/bs.c
  + remove dead code in test/demo_defkey.c
  + remove an unused assignment in progs/infocmp.c
  + fix a limit check in tack/ansi.c tools_charset()
  + fix tack/ansi.c tools_status() to perform the VT320/VT420 tests in
    request_cfss().  The function had exited too soon.
  + fix a memory leak in tic.c's make_namelist()
  + fix a couple of places in tack/output.c which did not check for EOF.
  + fix a loop-condition in test/bs.c
  + add index checks in lib_color.c for color palettes
  + add index checks in progs/dump_entry.c for version_filter() handling
    of V_BSD case.
  + fix a possible null-pointer dereference in copywin()
  + fix a possible null-pointer dereference in waddchnstr()
  + add a null-pointer check in _nc_expand_try()
  + add a null-pointer check in tic.c's make_namelist()
  + add null-pointer checks in test/cardfile.c
  + fix a double-free in ncurses/tinfo/trim_sgr0.c
  + fix a double-free in ncurses/base/wresize.c
  + add try/catch block to c++/cursesmain.cc

20070331
  + modify Ada95 binding to build with --enable-reentrant by wrapping
    global variables (bug: acs_map does not yet work).
  + modify Ada95 binding to use the new access-functions, allowing it
    to build/run when NCURSES_OPAQUE is set.
  + add access-functions and macros to return properties of the WINDOW
    structure, e.g., when NCURSES_OPAQUE is set.
  + improved install-sh's quoting.
  + use mkdirs.sh rather than mkinstalldirs, e.g., to use fixes from
    other programs.

20070324
  + eliminate part of the direct use of WINDOW data from Ada95 interface.
  + fix substitutions for termlib filename to make configure option
    --enable-reentrant work with --with-termlib.
  + change a constructor for NCursesWindow to allow compiling with
    NCURSES_OPAQUE set, since we cannot pass a reference to an opaque
    pointer.

20070317
  + ignore --with-chtype=unsigned since unsigned is always added to
    the type in curses.h; do the same for --with-mmask-t.
  + change warning regarding --enable-ext-colors and wide-character
    in the configure script to an error.
  + tweak error message in CF_WITH_LIBTOOL to distinguish GNU libtool
    from other programs such as Darwin's libtool (report by Michail
    Vidiassov)
  + modify edit_man.sh to allow for multiple substitutions per line.
  + set locale in misc/ncurses-config.in since it uses a range
  + change permissions for the libncurses++.a install (report by Michail
    Vidiassov).
  + corrected length of temporary buffer in wide-character version
    of set_field_buffer() (related to report by Bryan Christ).

20070311
  + fix mk-1st.awk script install_shlib() function, broken in 20070224
    changes for cygwin (report by Michail Vidiassov).

20070310
  + increase size of array in _nc_visbuf2n() to make "tic -v" work
    properly in its similar_sgr() function (report/analysis by Peter
    Santoro).
  + add --enable-reentrant configure option for ongoing changes to
    implement a reentrant version of ncurses:
    + libraries are suffixed with "t"
    + wrap several global variables (curscr, newscr, stdscr, ttytype,
      COLORS, COLOR_PAIRS, COLS, ESCDELAY, LINES and TABSIZE) as
      functions returning values stored in SCREEN or cur_term.
    + move some initialization (LINES, COLS) from lib_setup.c,
      i.e., setupterm() to _nc_setupscreen(), i.e., newterm().

20070303
  + regenerated html documentation.
  + add NCURSES_OPAQUE symbol to curses.h, will use to make structs
    opaque in selected configurations.
  + move the chunk in lib_acs.c which resets acs capabilities when
    running on a terminal whose locale interferes with those into
    _nc_setupscreen(), so the libtinfo/libtinfow files can be made
    identical (requested by Miroslav Lichvar).
  + do not use configure variable SHLIB_LIBS for building libraries
    outside the ncurses directory, since that symbol is customized
    only for that directory, and using it introduces an unneeded
    dependency on libdl (requested by Miroslav Lichvar).
  + modify mk-1st.awk so the generated makefile rules for linking or
    installing shared libraries do not first remove the library, in
    case it is in use, e.g., libncurses.so by /bin/sh (report by Jeff
    Chua).
  + revised section "Using NCURSES under XTERM" in ncurses-intro.html
    (prompted by newsgroup comment by Nick Guenther).

20070224
  + change internal return codes of _nc_wgetch() to check for cases
    where KEY_CODE_YES should be returned, e.g., if a KEY_RESIZE was
    ungetch'd, and read by wget_wch().
  + fix static-library build broken in 20070217 changes to remove "-ldl"
    (report by Miroslav Lichvar).
  + change makefile/scripts for cygwin to allow building termlib.
  + use Form_Hook in manpages to match form.h
  + use Menu_Hook in manpages, as well as a few places in menu.h
  + correct form- and menu-manpages to use specific Field_Options,
    Menu_Options and Item_Options types.
  + correct prototype for _tracechar() in manpage (cf: 20011229).
  + correct prototype for wunctrl() in manpage.

20070217
  + fixes for $(TICS_LIST) in ncurses/Makefile (report by Miroslav
    Lichvar).
  + modify relinking of shared libraries to apply only when rpath is
    enabled, and add --disable-relink option which can be used to
    disable the feature altogether (reports by Michail Vidiassov,
    Adam J Richter).
  + fix --with-termlib option for wide-character configuration, stripping
    the "w" suffix in one place (report by Miroslav Lichvar).
  + remove "-ldl" from some library lists to reduce dependencies in
    programs (report by Miroslav Lichvar).
  + correct description of --enable-signed-char in configure --help
    (report by Michail Vidiassov).
  + add pattern for GNU/kFreeBSD configuration to CF_XOPEN_SOURCE,
    which matches an earlier change to CF_SHARED_OPTS, from xterm #224
    fixes.
  + remove "${DESTDIR}" from -install_name option used for linking
    shared libraries on Darwin (report by Michail Vidiassov).

20070210
  + add test/inchs.c, test/inch_wide.c, to test win_wchnstr().
  + remove libdl from library list for termlib (report by Miroslav
    Lichvar).
  + fix configure.in to allow --without-progs --with-termlib (patch by
    Miroslav Lichvar).
  + modify win_wchnstr() to ensure that only a base cell is returned
    for each multi-column character (prompted by report by Wei Kong
    regarding change in mvwin_wch(), cf: 20041023).

20070203
  + modify fix_wchnstr() in form library to strip attributes (and color)
    from the cchar_t array (field cells) read from a field's window.
    Otherwise, when copying the field cells back to the window, the
    associated color overrides the field's background color (report by
    Ricardo Cantu).
  + improve tracing for form library, showing created forms, fields, etc.
  + ignore --enable-rpath configure option if --with-shared was omitted.
  + add _nc_leaks_tinfo(), _nc_free_tic(), _nc_free_tinfo() entrypoints
    to allow leak-checking when both tic- and tinfo-libraries are built.
  + drop CF_CPP_VSCAN_FUNC macro from configure script, since C++ binding
    no longer relies on it.
  + disallow combining configure script options --with-ticlib and
    --enable-termcap (report by Rong-En Fan).
  + remove tack from ncurses tree.

20070128
  + fix typo in configure script that broke --with-termlib option
    (report by Rong-En Fan).

20070127
  + improve fix for FreeBSD gnu/98975, to allow for null pointer passed
    to tgetent() (report by Rong-En Fan).
  + update tack/HISTORY and tack/README to tell how to build it after
    it is removed from the ncurses tree.
  + fix configure check for libtool's version to trim blank lines
    (report by sci-fi@hush.ai).
  + review/eliminate other original-file artifacts in cursesw.cc, making
    its license consistent with ncurses.
  + use ncurses vw_scanw() rather than reading into a fixed buffer in
    the c++ binding for scanw() methods (prompted by report by Nuno Dias).
  + eliminate fixed-buffer vsprintf() calls in c++ binding.

20070120
  + add _nc_leaks_tic() to separate leak-checking of tic library from
    term/ncurses libraries, and thereby eliminate a library dependency.
  + fix test/mk-test.awk to ignore blank lines.
  + correct paths in include/headers, for --srcdir (patch by Miroslav
    Lichvar).

20070113
  + add a break-statement in misc/shlib to ensure that it exits on the
    _first_ matched directory (report by Paul Novak).
  + add tack/configure, which can be used to build tack outside the
    ncurses build-tree.
  + add --with-ticlib option, to build/install the tic-support functions
    in a separate library (suggested by Miroslav Lichvar).

20070106
  + change MKunctrl.awk to reduce relocation table for unctrl.o
  + change MKkeyname.awk to reduce relocation table for keyname.o
    (patch by Miroslav Lichvar).

20061230
  + modify configure check for libtool's version to trim blank lines
    (report by sci-fi@hush.ai).
  + modify some modules to allow them to be reentrant if _REENTRANT is
    defined: lib_baudrate.c, resizeterm.c (local data only)
  + eliminate static data from some modules: add_tries.c, hardscroll.c,
    lib_ttyflags.c, lib_twait.c
  + improve manpage install to add aliases for the transformed program
    names, e.g., from --program-prefix.
  + used linklint to verify links in the HTML documentation, made fixes
    to manpages as needed.
  + fix a typo in curs_mouse.3x (report by William McBrine).
  + fix install-rule for ncurses5-config to make the bin-directory.

20061223
  + modify configure script to omit the tic (terminfo compiler) support
    from ncurses library if --without-progs option is given.
  + modify install rule for ncurses5-config to do this via "install.libs"
  + modify shared-library rules to allow FreeBSD 3.x to use rpath.
  + update config.guess, config.sub

20061217 5.6 release for upload to

20061217
  + add ifdef's for <wctype.h> for HPUX, which has the corresponding
    definitions in <wchar.h>.
  + revert the va_copy() change from 20061202, since it was neither
    correct nor portable.
  + add $(LOCAL_LIBS) definition to progs/Makefile.in, needed for
    rpath on Solaris.
  + ignore wide-acs line-drawing characters that wcwidth() claims are
    not one-column.  This is a workaround for Solaris' broken locale
    support.

20061216
  + modify configure --with-gpm option to allow it to accept a parameter,
    i.e., the name of the dynamic GPM library to load via dlopen()
    (requested by Bryan Henderson).
  + add configure option --with-valgrind, changes from vile.
  + modify configure script AC_TRY_RUN and AC_TRY_LINK checks to use
    'return' in preference to 'exit()'.

20061209
  + change default for --with-develop back to "no".
  + add XTABS to tracing of TTY bits.
  + updated autoconf patch to ifdef-out the misfeature which declares
    exit() for configure tests.  This fixes a redefinition warning on
    Solaris.
  + use ${CC} rather than ${LD} in shared library rules for IRIX64,
    Solaris to help ensure that initialization sections are provided for
    extra linkage requirements, e.g., of C++ applications (prompted by
    comment by Casper Dik in newsgroup).
  + rename "$target" in CF_MAN_PAGES to make it easier to distinguish
    from the autoconf predefined symbol.  There was no conflict,
    since "$target" was used only in the generated edit_man.sh file,
    but SuSE's rpm package contains a patch.

20061202
  + update man/term.5 to reflect extended terminfo support and hashed
    database configuration.
  + updates for test/configure script.
  + adapted from SuSE rpm package:
    + remove long-obsolete workaround for broken-linker which declared
      cur_term in tic.c
    + improve error recovery in PUTC() macro when wcrtomb() does not
      return usable results for an 8-bit character.
  + patches from rpm package (SuSE):
    + use va_copy() in extra varargs manipulation for tracing version
      of printw, etc.
    + use a va_list rather than a null in _nc_freeall()'s call to
      _nc_printf_string().
  + add some see-also references in manpages to show related
    wide-character functions (suggested by Claus Fischer).

20061125
  + add a check in lib_color.c to ensure caller does not increase COLORS
    above max_colors, which is used as an array index (discussion with
    Simon Sasburg).
  + add ifdef's allowing ncurses to be built with tparm() using either
    varargs (the existing status) or a fixed-parameter list (to match
    X/Open).

20061104
  + fix redrawing of windows other than stdscr using wredrawln() by
    touching the corresponding rows in curscr (discussion with Dan
    Gookin).
  + add test/redraw.c
  + add test/echochar.c
  + review/cleanup manpage descriptions of error-returns for form- and
    menu-libraries (prompted by FreeBSD docs/46196).

20061028
  + add AUTHORS file -TD
  + omit the -D options from output of the new config script --cflags
    option (suggested by Ralf S Engelschall).
  + make NCURSES_INLINE unconditionally defined in curses.h

20061021
  + revert change to accommodate bash 3.2, since that breaks other
    platforms, e.g., Solaris.
  + minor fixes to NEWS file to simplify scripting to obtain the list of
    contributors.
  + improve some shared-library configure scripting for Linux, FreeBSD
    and NetBSD to make "--with-shlib-version" work.
  + change configure-script rules for FreeBSD shared libraries to allow
    for rpath support in versions past 3.
  + use $(DESTDIR) in makefile rules for installing/uninstalling the
    package config script (reports/patches by Christian Wiese,
    Ralf S Engelschall).
  + fix a warning in the configure script for NetBSD 2.0, working around
    spurious blanks embedded in its ${MAKEFLAGS} symbol.
  + change test/Makefile to simplify installing test programs in a
    different directory when --enable-rpath is used.

20061014
  + work around bug in bash 3.2 by adding extra quotes (Jim Gifford).
  + add/install a package config script, e.g., "ncurses5-config" or
    "ncursesw5-config", according to configuration options.

20061007
  + add several GNU Screen terminfo variations with 16- and 256-colors,
    and status line (Alain Bench).
  + change the way shared libraries (other than libtool) are installed.
    Rather than copying the build-tree's libraries, link the shared
    objects into the install directory.  This makes the --with-rpath
    option work except with $(DESTDIR) (cf: 20000930).

20060930
  + fix ifdef in c++/internal.h for QNX 6.1
  + test-compiled with (old) egcs-1.1.2, modified configure script to
    not unset the $CXX and related variables which would prevent this.
  + fix a few terminfo.src typos exposed by improvements to "-f" option.
  + improve infocmp/tic "-f" option formatting.

20060923
  + make --disable-largefile option work (report by Thomas M Ott).
  + updated html documentation.
  + add ka2, kb1, kb3, kc2 to vt220-keypad as an extension -TD
  + minor improvements to rxvt+pcfkeys -TD

20060916
  + move static data from lib_mouse.c into SCREEN struct.
  + improve ifdef's for _POSIX_VDISABLE in tset to work with Mac OS X
    (report by Michail Vidiassov).
  + modify CF_PATH_SYNTAX to ensure it uses the result from --prefix
    option (from lynx changes) -TD
  + adapt AC_PROG_EGREP check, noting that this is likely to be another
    place aggravated by POSIXLY_CORRECT.
  + modify configure check for awk to ensure that it is found (prompted
    by report by Christopher Parker).
  + update config.sub

20060909
  + add kon, kon2 and jfbterm terminfo entries (request by Till Maas) -TD
  + remove invis capability from klone+sgr, mainly used by linux entry,
    since it does not really do this -TD

20060903
  + correct logic in wadd_wch() and wecho_wch(), which did not guard
    against passing the multi-column attribute into a call on waddch(),
    e.g., using data returned by win_wch() (cf: 20041023)
    (report by Sadrul H Chowdhury).

20060902
  + fix kterm's acsc string -TD
  + fix for change to tic/infocmp in 20060819 to ensure no blank is
    embedded into a termcap description.
  + workaround for 20050806 ifdef's change to allow visbuf.c to compile
    when using --with-termlib --with-trace options.
  + improve tgetstr() by making the return value point into the user's
    buffer, if provided (patch by Miroslav Lichvar, see Redhat Bugzilla
    #202480).
  + correct libraries needed for foldkeys (report by Stanislav Ievlev)

20060826
  + add terminfo entries for xfce terminal (xfce) and multi gnome
    terminal (mgt) -TD
  + add test/foldkeys.c

20060819
  + modify tic and infocmp to avoid writing trailing blanks on terminfo
    source output (Debian #378783).
  + modify configure script to ensure that if the C compiler is used
    rather than the loader in making shared libraries, the $(CFLAGS)
    variable is also used (Redhat Bugzilla #199369).
  + port hashed-db code to db2 and db3.
  + fix a bug in tgetent() from 20060625 and 20060715 changes
    (patch/analysis by Miroslav Lichvar, see Redhat Bugzilla #202480).

20060805
  + updated xterm function-keys terminfo to match xterm #216 -TD
  + add configure --with-hashed-db option (tested only with FreeBSD 6.0,
    e.g., the db 1.8.5 interface).

20060729
  + modify toe to access termcap data, e.g., via cgetent() functions,
    or as a text file if those are not available.
  + use _nc_basename() in tset to improve $SHELL check for csh/sh.
  + modify _nc_read_entry() and _nc_read_termcap_entry() so infocmp
    can access termcap data when the terminfo database is disabled.

20060722
  + widen the test for xterm kmous a little to allow for other strings
    than \E[M, e.g., for xterm-sco functionality in xterm.
  + update xterm-related terminfo entries to match xterm patch #216 -TD
  + update config.guess, config.sub

20060715
  + fix for install-rule in Ada95 to add terminal_interface.ads
    and terminal_interface.ali (anonymous posting in comp.lang.ada).
  + correction to manpage for getcchar() (report by William McBrine).
  + add test/chgat.c
  + modify wchgat() to mark updated cells as changed so a refresh will
    repaint those cells (comments by Sadrul H Chowdhury and William
    McBrine).
  + split up dependency of names.c and codes.c in ncurses/Makefile to
    work with parallel make (report/analysis by Joseph S Myers).
  + suppress a warning message (which is ignored) for systems without
    an ldconfig program (patch by Justin Hibbits).
  + modify configure script --disable-symlinks option to allow one to
    disable symlink() in tic even when link() does not work (report by
    Nigel Horne).
  + modify MKfallback.sh to use tic -x when constructing fallback tables
    to allow extended capabilities to be retrieved from a fallback entry.
  + improve leak-checking logic in tgetent() from 20060625 to ensure that
    it does not free the current screen (report by Miroslav Lichvar).

20060708
  + add a check for _POSIX_VDISABLE in tset (NetBSD #33916).
  + correct _nc_free_entries() and related functions used for memory-leak
    checking of tic.

20060701
  + revert a minor change for magic-cookie support from 20060513, which
    caused unexpected reset of attributes, e.g., when resizing test/view
    in color mode.
  + note in clear manpage that the program ignores command-line
    parameters (prompted by Debian #371855).
  + fixes to make lib_gen.c build properly with changes to the configure
    --disable-macros option and NCURSES_NOMACROS (cf: 20060527)
  + update/correct several terminfo entries -TD
  + add some notes regarding copyright to terminfo.src -TD

20060625
  + fixes to build Ada95 binding with gnat-4.1.0
  + modify read_termtype() so the term_names data is always allocated as
    part of the str_table, a better fix for a memory leak (cf: 20030809).
  + reduce memory leaks in repeated calls to tgetent() by remembering the
    last TERMINAL* value allocated to hold the corresponding data and
    freeing that if the tgetent() result buffer is the same as the
    previous call (report by "Matt" for FreeBSD gnu/98975).
  + modify tack to test extended capability function-key strings.
  + improved gnome terminfo entry (Gentoo #122566).
  + improved xterm-256color terminfo entry (patch by Alain Bench).

20060617
  + fix two small memory leaks related to repeated tgetent() calls
    with TERM=screen (report by "Matt" for FreeBSD gnu/98975).
  + add --enable-signed-char to simplify Debian package.
  + reduce name-pollution in term.h by removing #define's for HAVE_xxx
    symbols.
  + correct typo in curs_terminfo.3x (Debian #369168).

20060603
  + enable the mouse in test/movewindow.c
  + improve a limit-check in frm_def.c (John Heasley).
  + minor copyright fixes.
  + change configure script to produce test/Makefile from data file.

20060527
  + add a configure option --enable-wgetch-events to enable
    NCURSES_WGETCH_EVENTS, and correct the associated loop-logic in
    lib_twait.c (report by Bernd Jendrissek).
  + remove include/nomacros.h from build, since the ifdef for
    NCURSES_NOMACROS makes that obsolete.
  + add entrypoints for some functions which were only provided as macros,
    to make the NCURSES_NOMACROS ifdef work properly: getcurx(), getcury(),
    getbegx(), getbegy(), getmaxx(), getmaxy(), getparx(), getpary() and
    wgetbkgrnd().
  + provide ifdef for NCURSES_NOMACROS which suppresses most macro
    definitions from curses.h, i.e., where a macro is defined to override
    a function to improve performance.  Allowing a developer to suppress
    these definitions can simplify some applications (discussion with
    Stanislav Ievlev).
  + improve description of memu/meml in terminfo manpage.

20060520
  + if msgr is false, reset video attributes when doing an automargin
    wrap to the next line.  This makes the ncurses 'k' test work properly
    for hpterm.
  + correct caching of keyname(), which was using only half of its table.
  + minor fixes to memory-leak checking.
  + make SCREEN._acs_map and SCREEN._screen_acs_map pointers rather than
    arrays, making ACS_LEN less visible to applications (suggested by
    Stanislav Ievlev).
  + move chunk in SCREEN ifdef'd for USE_WIDEC_SUPPORT to the end, so
    _screen_acs_map will have the same offset in both ncurses/ncursesw,
    making the corresponding tinfo/tinfow libraries binary-compatible
    (cf: 20041016, report by Stanislav Ievlev).

20060513
  + improve debug-tracing for EmitRange().
  + change default for --with-develop to "yes".  Add NCURSES_NO_HARD_TABS
    and NCURSES_NO_MAGIC_COOKIE environment variables to allow runtime
    suppression of the related hard-tabs and xmc-glitch features.
  + add ncurses version number to top-level manpages, e.g., ncurses, tic,
    infocmp, terminfo as well as form, menu, panel.
  + update config.guess, config.sub
  + modify ncurses.c to work around a bug in NetBSD 3.0 curses
    (field_buffer returning null for a valid field).  The 'r' test
    appears to not work with that configuration since the new_fieldtype()
    function is broken in that implementation.

20060506
  + add hpterm-color terminfo entry -TD
  + fixes to compile test-programs with HPUX 11.23

20060422
  + add copyright notices to files other than those that are generated,
    data or adapted from pdcurses (reports by William McBrine, David
    Taylor).
  + improve rendering on hpterm by not resetting attributes at the end
    of doupdate() if the terminal has the magic-cookie feature (report
    by Bernd Rieke).
  + add 256color variants of terminfo entries for programs which are
    reported to implement this feature -TD

20060416
  + fix typo in change to NewChar() macro from 20060311 changes, which
    broke tab-expansion (report by Frederic L W Meunier).

20060415
  + document -U option of tic and infocmp.
  + modify tic/infocmp to suppress smacs/rmacs when acsc is suppressed
    due to size limit, e.g., converting to termcap format.  Also
    suppress them if the output format does not contain acsc and it
    was not VT100-like, i.e., a one-one mapping (Novell #163715).
  + add configure check to ensure that SIGWINCH is defined on platforms
    such as OS X which exclude that when _XOPEN_SOURCE, etc., are
    defined (report by Nicholas Cole)

20060408
  + modify write_object() to not write coincidental extensions of an
    entry made due to it being referenced in a use= clause (report by
    Alain Bench).
  + another fix for infocmp -i option, which did not ensure that some
    escape sequences had comparable prefixes (report by Alain Bench).

20060401
  + improve discussion of init/reset in terminfo and tput manpages
    (report by Alain Bench).
  + use is3 string as a fallback for rs3 in the reset program; it was
    using is2 (report by Alain Bench).
  + correct logic for infocmp -i option, which did not account for
    multiple digits in a parameter (cf: 20040828) (report by Alain
    Bench).
  + move _nc_handle_sigwinch() to lib_setup.c to make --with-termlib
    option work after 20060114 changes (report by Arkadiusz Miskiewicz).
  + add copyright notices to test-programs as needed (report by William
    McBrine).

20060318
  + modify ncurses.c 'F' test to combine the wide-characters with color
    and/or video attributes.
  + modify test/ncurses to use CTL/Q or ESC consistently for exiting
    a test-screen (some commands used 'x' or 'q').

20060312
  + fix an off-by-one in the scrolling-region change (cf: 20060311).

20060311
  + add checks in waddchnstr() and wadd_wchnstr() to stop copying when
    a null character is found (report by Igor Bogomazov).
  + modify progs/Makefile.in to make "tput init" work properly with
    cygwin, i.e., do not pass a ".exe" suffix in the reference string
    used in check_aliases (report by Samuel Thibault).
  + add some checks to ensure the current position is within the
    scrolling region before scrolling on a new line (report by Dan
    Gookin).
  + change some NewChar() usage to static variables to work around
    stack garbage introduced when cchar_t is not packed (Redhat #182024).

20060225
  + workarounds to build test/movewindow with PDCurses 2.7.
  + fix for nsterm-16color entry (patch by Alain Bench).
  + correct a typo in infocmp manpage (Debian #354281).

20060218
  + add nsterm-16color entry -TD
  + updated mlterm terminfo entry -TD
  + remove 970913 feature for copying subwindows as they are moved in
    mvwin() (discussion with Bryan Christ).
  + modify test/demo_menus.c to demonstrate moving a menu (both the
    window and subwindow) using shifted cursor-keys.
  + start implementing recursive mvwin() in movewindow.c (incomplete).
  + add a fallback definition for GCC_PRINTFLIKE() in test.priv.h,
    for movewindow.c (report by William McBrine).
  + add help-message to test/movewindow.c

20060211
  + add test/movewindow.c, to test mvderwin().
  + fix ncurses soft-key test so color changes are shown immediately
    rather than delayed.
  + modify ncurses soft-key test to hide the keys when exiting the test
    screen.
  + fixes to build test programs with PDCurses 2.7, e.g., its headers
    rely on autoconf symbols, and it declares stubs for nonfunctional
    terminfo and termcap entrypoints.

20060204
  + improved test/configure to build test/ncurses on HPUX 11 using the
    vendor curses.
  + documented ALTERNATE CONFIGURATIONS in the ncurses manpage, for the
    benefit of developers who do not read INSTALL.

20060128
  + correct form library Window_To_Buffer() change (cf: 20040516), which
    should ignore the video attributes (report by Ricardo Cantu).

20060121
  + minor fixes to xmc-glitch experimental code, tested with xterm:
    + suppress line-drawing
    + implement max_attributes
  + minor fixes for the database iterator.
  + fix some buffer limits in c++ demo (comment by Falk Hueffner in
    Debian #348117).

20060114
  + add toe -a option, to show all databases.  This uses new private
    interfaces in the ncurses library for iterating through the list of
    databases.
  + fix toe from 20000909 changes which made it not look at
    $HOME/.terminfo
  + make toe's -v option parameter optional as per manpage.
  + improve SIGWINCH handling by postponing its effect during newterm(),
    etc., when allocating screens.

20060111
  + modify wgetnstr() to return KEY_RESIZE if a sigwinch occurs.  Use
    this in test/filter.c
  + fix an error in the filter() modification which caused some
    applications to fail.

20060107
  + check if filter() was called when getting the screensize.  Keep it
    at 1 if so (based on Redhat #174498).
  + add extension nofilter().
  + refined the workaround for ACS mapping.
  + make ifdef's consistent in curses.h for the extended colors so the
    header file can be used for the normal curses library.  The header
    file installed for extended colors is a variation of the
    wide-character configuration (report by Frederic L W Meunier).

20051231
  + add a workaround to ACS mapping to allow applications such as
    test/blue.c to use the "PC ROM" characters by masking them with
    A_ALTCHARSET.  This worked up until 5.5, but was lost in the revision
    of legacy coding (report by Michael Deutschmann).
  + add a null-pointer check in the wide-character version of
    calculate_actual_width() (report by Victor Julien).
  + improve test/ncurses 'd' (color-edit) test by allowing the RGB
    values to be set independently (patch by William McBrine).
  + modify test/configure script to allow building test programs with
    PDCurses/X11.
  + modified test programs to allow some to work with NetBSD curses.
    Several do not, because NetBSD curses implements a subset of X/Open
    curses and also lacks many of the SVr4 additions.  But it's enough
    for comparison.
  + update config.guess and config.sub

20051224
  + use BSD-specific fix for return-value from cgetent() from CVS, where
    an unknown terminal type would be reported as "database not found".
  + make tgetent() return codes more readable using new symbols
    TGETENT_YES, etc.
  + remove references to non-existent "tctest" program.
  + remove TESTPROGS from progs/Makefile.in (it was referring to code
    that was never built in that directory).
  + fix typos in curs_addchstr.3x and some doc files (noticed in OpenBSD
    CVS).

20051217
  + add use_legacy_coding() function to support lynx's font-switching
    feature.
  + fix formatting in curs_termcap.3x (report by Mike Frysinger).
  + modify MKlib_gen.sh to change preprocessor-expanded _Bool back to
    bool.

20051210
  + extend test/ncurses.c 's' (overlay window) test to exercise overlay(),
    overwrite() and copywin() with different combinations of colors and
    attributes (including background color) to make it easy to see the
    effect of the different functions.
  + corrections to menu/m_global.c for wide-characters (report by
    Victor Julien).

20051203
  + add configure option --without-dlsym, allowing developers to
    configure GPM support without using dlsym() (discussion with Michael
    Setzer).
  + fix wins_nwstr(), which did not handle single-column non-8bit codes
    (Debian #341661).

20051126
  + move prototypes for wide-character trace functions from curses.tail
    to curses.wide to avoid accidental reference to those if
    _XOPEN_SOURCE_EXTENDED is defined without ensuring that <wchar.h> is
    included.
  + add/use NCURSES_INLINE definition.
  + change some internal functions to use int/unsigned rather than the
    short equivalents.

20051119
  + remove a redundant check in lib_color.c (Debian #335655).
  + use ld's -search_paths_first option on Darwin to work around odd
    search rules on that platform (report by Christian Gennerat, analysis
    by Andrea Govoni).
  + remove special case for Darwin in CF_XOPEN_SOURCE configure macro.
  + ignore EINTR in tcgetattr/tcsetattr calls (Debian #339518).
  + fix several bugs in test/bs.c (patch by Stephen Lindholm).

20051112
  + other minor fixes to cygwin based on tack -TD
  + correct smacs in cygwin (Debian #338234, report by Baurzhan
    Ismagulov, who noted that it was fixed in Cygwin).

20051029
  + add shifted up/down arrow codes to xterm-new as kind/kri strings -TD
  + modify wbkgrnd() to avoid clearing the A_CHARTEXT attribute bits
    since those record the state of multicolumn characters (Debian
    #316663).
  + modify werase to clear multicolumn characters that extend into
    a derived window (Debian #316663).

20051022
  + move assignment from environment variable ESCDELAY from initscr()
    down to newterm() so the environment variable affects timeouts for
    terminals opened with newterm() as well.
  + fix a memory leak in keyname().
  + add test/demo_altkeys.c
  + modify test/demo_defkey.c to exit from loop via 'q' to allow
    leak-checking, as well as fix a buffer size in a winnstr() call.

20051015
  + correct order of use-clauses in rxvt-basic entry which made codes for
    f1-f4 vt100-style rather than vt220-style (report by Gabor Z Papp).
  + suppress configure check for gnatmake if Ada95/Makefile.in is not
    found.
  + correct a typo in configure --with-bool option for the case where
    --without-cxx is used (report by Daniel Jacobowitz).
  + add a note to INSTALL's discussion of --with-normal, pointing out
    that one may wish to use --without-gpm to ensure a completely
    static link (prompted by report by Felix von Leitner).

20051010 5.5 release for upload to

20051008
  + document in demo_forms.c some portability issues.

20051001
  + document side-effect of werase() which sets the cursor position.
  + save/restore the current position in form field editing to make
    overlay mode work.

20050924
  + correct header dependencies in progs, allowing parallel make (report
    by Daniel Jacobowitz).
  + modify CF_BUILD_CC to ensure that pre-setting $BUILD_CC overrides
    the configure check for --with-build-cc (report by Daniel Jacobowitz).
  + modify CF_CFG_DEFAULTS to not use /usr as the default prefix for
    NetBSD.
  + update config.guess and config.sub from

20050917
  + modify sed expression which computes path for /usr/lib/terminfo
    symbolic link in install to ensure that it does not change unexpected
    levels of the path (Gentoo #42336).
  + modify default for --disable-lp64 configure option to reduce impact
    on existing 64-bit builds.  Enabling the _LP64 option may change the
    size of chtype and mmask_t.  However, for ABI 6, it is enabled by
    default (report by Mike Frysinger).
  + add configure script check for --enable-ext-mouse, bump ABI to 6 by
    default if it is used.
  + improve configure script logic for bumping ABI to omit this if the
    --with-abi-version option was used.
  + update address for Free Software Foundation in tack's source.
  + correct wins_wch(), which was not marking the filler-cells of
    multi-column characters (cf: 20041023).

20050910
  + modify mouse initialization to ensure that Gpm_Open() is called only
    once.  Otherwise GPM gets confused in its initialization of signal
    handlers (Debian #326709).

20050903
  + modify logic for backspacing in a multiline form field to ensure that
    it works even when the preceding line is full (report by Frank van
    Vugt).
  + remove comment about BUGS section of ncurses manpage (Debian #325481)

20050827
  + document some workarounds for shared and libtool library
    configurations in INSTALL (see --with-shared and --with-libtool).
  + modify CF_GCC_VERSION and CF_GXX_VERSION macros to accommodate
    cross-compilers which emit the platform name in their version
    message, e.g.,
      arm-sa1100-linux-gnu-g++ (GCC) 4.0.1
    (report by Frank van Vugt).

20050820
  + start updating documentation for upcoming 5.5 release.
  + fix to make libtool and libtinfo work together again (cf: 20050122).
  + fixes to allow building traces into libtinfo
  + add debug trace to tic that shows if/how ncurses will write to the
    lower corner of a terminal's screen.
  + update llib-l* files.

20050813
  + modify initializers in c++ binding to build with old versions of g++.
  + improve special case for 20050115 repainting fix, ensuring that if
    the first changed cell is not a character, the range to be
    repainted is adjusted to start at a character's beginning (Debian
    #316663).
3173 3174 20050806 3175 + fixes to build on QNX 6.1 3176 + improve configure script checks for Intel 9.0 compiler. 3177 + remove #include's for libc.h (obsolete). 3178 + adjust ifdef's in curses.priv.h so that when cross-compiling to 3179 produce comp_hash and make_keys, no dependency on wchar.h is needed. 3180 That simplifies the build-cppflags (report by Frank van Vugt). 3181 + move modules related to key-binding into libtinfo to fix linkage 3182 problem caused by 20050430 changes to MKkeyname.sh (report by 3183 Konstantin Andreev). 3184 3185 20050723 3186 + updates/fixes for configure script macros from vile -TD 3187 + make prism9's sgr string agree with the rest of the terminfo -TD 3188 + make vt220's sgr0 string consistent with sgr string, do this for 3189 several related cases -TD 3190 + improve translation to termcap by filtering the 'me' (sgr0) strings 3191 as in the runtime call to tgetent() (prompted by a discussion with 3192 Thomas Klausner). 3193 + improve tic check for sgr0 versus sgr(0), to help ensure that sgr0 3194 resets line-drawing. 3195 3196 20050716 3197 + fix special cases for trimming sgr0 for hurd and vt220 (Debian 3198 #318621). 3199 + split-out _nc_trim_sgr0() from modifications made to tgetent(), to 3200 allow it to be used by tic to provide information about the runtime 3201 changes that would be made to sgr0 for termcap applications. 3202 + modify make_sed.sh to make the group-name in the NAME section of 3203 form/menu library manpage agree with the TITLE string when renaming 3204 is done for Debian (Debian #78866). 3205 3206 20050702 3207 + modify parameter type in c++ binding for insch() and mvwinsch() to 3208 be consistent with underlying ncurses library (was char, is chtype). 3209 + modify treatment of Intel compiler to allow _GNU_SOURCE to be defined 3210 on Linux. 3211 + improve configure check for nanosleep(), checking that it works since 3212 some older systems such as AIX 4.3 have a nonworking version. 
3213 3214 20050625 3215 + update config.guess and config.sub from 3216 3217 + modify misc/shlib to work in test-directory. 3218 + suppress $suffix in misc/run_tic.sh when cross-compiling. This 3219 allows cross-compiles to use the host's tic program to handle the 3220 "make install.data" step. 3221 + improve description of $LINES and $COLUMNS variables in manpages 3222 (prompted by report by Dave Ulrick). 3223 + improve description of cross-compiling in INSTALL 3224 + add NCURSES-Programming-HOWTO.html by Pradeep Padala 3225 (see). 3226 + modify configure script to obtain soname for GPM library (discussion 3227 with Daniel Jacobowitz). 3228 + modify configure script so that --with-chtype option will still 3229 compute the unsigned literals suffix for constants in curses.h 3230 (report by Daniel Jacobowitz: 3231 + patches from Daniel Jacobowitz: 3232 + the man_db.renames entry for tack.1 was backwards. 3233 + tack.1 had some 1m's that should have been 1M's. 3234 + the section for curs_inwstr.3 was wrong. 3235 3236 20050619 3237 + correction to --with-chtype option (report by Daniel Jacobowitz). 3238 3239 20050618 3240 + move build-time edit_man.sh and edit_man.sed scripts to top directory 3241 to simplify reusing them for renaming tack's manpage (prompted by a 3242 review of Debian package). 3243 + revert minor optimization from 20041030 (Debian #313609). 3244 + libtool-specific fixes, tested with libtool 1.4.3, 1.5.0, 1.5.6, 3245 1.5.10 and 1.5.18 (all work except as noted previously for the c++ 3246 install using libtool 1.5.0): 3247 + modify the clean-rule in c++/Makefile.in to work with IRIX64 make 3248 program. 3249 + use $(LIBTOOL_UNINSTALL) symbol, overlooked in 20030830 3250 + add configure options --with-chtype and --with-mmask-t, to allow 3251 overriding of the non-LP64 model's use of the corresponding types. 
3252 + revise test for size of chtype (and mmask_t), which always returned 3253 "long" due to an uninitialized variable (report by Daniel Jacobowitz). 3254 3255 20050611 3256 + change _tracef's that used "%p" format for va_list values to ignore 3257 that, since on some platforms those are not pointers. 3258 + fixes for long-formats in printf's due to largefile support. 3259 3260 20050604 3261 + fixes for termcap support: 3262 + reset pointer to _nc_curr_token.tk_name when the input stream is 3263 closed, which could point to free memory (cf: 20030215). 3264 + delink TERMTYPE data which is used by the termcap reader, so that 3265 extended names data will be freed consistently. 3266 + free pointer to TERMTYPE data in _nc_free_termtype() rather than 3267 its callers. 3268 + add some entrypoints for freeing permanently allocated data via 3269 _nc_freeall() when NO_LEAKS is defined. 3270 + amend 20041030 change to _nc_do_color to ensure that optimization is 3271 applied only when the terminal supports back_color_erase (bce). 3272 3273 20050528 3274 + add sun-color terminfo entry -TD 3275 + correct a missing assignment in c++ binding's method 3276 NCursesPanel::UserPointer() from 20050409 changes. 3277 + improve configure check for large-files, adding check for dirent64 3278 from vile -TD 3279 + minor change to configure script to improve linker options for the 3280 Ada95 tree. 3281 3282 20050515 3283 + document error conditions for ncurses library functions (report by 3284 Stanislav Ievlev). 3285 + regenerated html documentation for ada binding. 3286 see 3287 3288 20050507 3289 + regenerated html documentation for manpages. 3290 + add $(BUILD_EXEEXT) suffix to invocation of make_keys in 3291 ncurses/Makefile (Gentoo #89772). 3292 + modify c++/demo.cc to build with g++ -fno-implicit-templates option 3293 (patch by Mike Frysinger). 3294 + modify tic to filter out long extended names when translating to 3295 termcap format. 
Only two characters are permissible for termcap 3296 capability names. 3297 3298 20050430 3299 + modify terminfo entries xterm-new and rxvt to add strings for 3300 shift-, control-cursor keys. 3301 + workaround to allow c++ binding to compile with g++ 2.95.3, which 3302 has a broken implementation of static_cast<> (patch by Jeff Chua). 3303 + modify initialization of key lookup table so that if an extended 3304 capability (tic -x) string is defined, and its name begins with 'k', 3305 it will automatically be treated as a key. 3306 + modify test/keynames.c to allow for the possibility of extended 3307 key names, e.g., via define_key(), or via "tic -x". 3308 + add test/demo_termcap.c to show the contents of given entry via the 3309 termcap interface. 3310 3311 20050423 3312 + minor fixes for vt100/vt52 entries -TD 3313 + add configure option --enable-largefile 3314 + corrected libraries used to build Ada95/gen/gen, found in testing 3315 gcc 4.0.0. 3316 3317 20050416 3318 + update config.guess, config.sub 3319 + modify configure script check for _XOPEN_SOURCE, disable that on 3320 Darwin whose header files have problems (patch by Chris Zubrzycki). 3321 + modify form library Is_Printable_String() to use iswprint() rather 3322 than wcwidth() for determining if a character is printable. The 3323 latter caused it to reject menu items containing non-spacing 3324 characters. 3325 + modify ncurses test program's F-test to handle non-spacing characters 3326 by combining them with a reverse-video blank. 3327 + review/fix several gcc -Wconversion warnings. 3328 3329 20050409 3330 + correct an off-by-one error in m_driver() for mouse-clicks used to 3331 position the mouse to a particular item. 3332 + implement test/demo_menus.c 3333 + add some checks in lib_mouse to ensure SP is set. 3334 + modify C++ binding to make 20050403 changes work with the configure 3335 --enable-const option. 3336 3337 20050403 3338 + modify start_color() to return ERR if it cannot allocate memory. 
3339 + address g++ compiler warnings in C++ binding by adding explicit 3340 member initialization, assignment operators and copy constructors. 3341 Most of the changes simply preserve the existing semantics of the 3342 binding, which can leak memory, etc., but by making these features 3343 visible, it provides a framework for improving the binding. 3344 + improve C++ binding using static_cast, etc. 3345 + modify configure script --enable-warnings to add options to g++ to 3346 correspond to the gcc --enable-warnings. 3347 + modify C++ binding to use some C internal functions to make it 3348 compile properly on Solaris (and other platforms). 3349 3350 20050327 3351 + amend change from 20050320 to limit it to configurations with a 3352 valid locale. 3353 + fix a bug introduced in 20050320 which broke the translation of 3354 nonprinting characters to uparrow form (report by Takahashi Tamotsu). 3355 3356 20050326 3357 + add ifdef's for _LP64 in curses.h to avoid using wasteful 64-bits for 3358 chtype and mmask_t, but add configure option --disable-lp64 in case 3359 anyone used that configuration. 3360 + update misc/shlib script to account for Mac OS X (report by Michail 3361 Vidiassov). 3362 + correct comparison for wrapping multibyte characters in 3363 waddch_literal() (report by Takahashi Tamotsu). 3364 3365 20050320 3366 + add -c and -w options to tset to allow user to suppress ncurses' 3367 resizing of the terminal emulator window in the special case where it 3368 is not able to detect the true size (report by Win Delvaux, Debian 3369 #300419). 3370 + modify waddch_nosync() to account for locale zn_CH.GBK, which uses 3371 codes 128-159 as part of multibyte characters (report by Wang 3372 WenRui, Debian #300512). 3373 3374 20050319 3375 + modify ncurses.c 'd' test to make it work with 88-color 3376 configuration, i.e., by implementing scrolling. 3377 + improve scrolling in ncurses.c 'c' and 'C' tests, e.g., for 88-color 3378 configuration. 
3379 3380 20050312 3381 + change tracemunch to use strict checking. 3382 + modify ncurses.c 'p' test to test line-drawing within a pad. 3383 + implement environment variable NCURSES_NO_UTF8_ACS to support 3384 miscellaneous terminal emulators which ignore alternate character 3385 set escape sequences when in UTF-8 mode. 3386 3387 20050305 3388 + change NCursesWindow::err_handler() to a virtual function (request by 3389 Steve Beal). 3390 + modify fty_int.c and fty_num.c to handle wide characters (report by 3391 Wolfgang Gutjahr). 3392 + adapt fix for fty_alpha.c to fty_alnum.c, which also handled normal 3393 and wide characters inconsistently (report by Wolfgang Gutjahr). 3394 + update llib-* files to reflect internal interface additions/changes. 3395 3396 20050226 3397 + improve test/configure script, adding tests for _XOPEN_SOURCE, etc., 3398 from lynx. 3399 + add aixterm-16color terminfo entry -TD 3400 + modified xterm-new terminfo entry to work with tgetent() changes -TD 3401 + extended changes in tgetent() from 20040710 to allow the substring of 3402 sgr0 which matches rmacs to be at the beginning of the sgr0 string 3403 (request by Thomas Wolff). Wolff says the visual effect in 3404 combination with pre-20040710 ncurses is improved. 3405 + fix off-by-one in winnstr() call which caused form field validation 3406 of multibyte characters to ignore the last character in a field. 3407 + correct logic in winsch() for inserting multibyte strings; the code 3408 would clear cells after the insertion rather than push them to the 3409 right (cf: 20040228). 3410 + fix an inconsistency in Check_Alpha_Field() between normal and wide 3411 character logic (report by Wolfgang Gutjahr). 3412 3413 20050219 3414 + fix a bug in editing wide-characters in form library: deleting a 3415 nonwide character modified the previous wide-character. 3416 + update manpage to describe NCURSES_MOUSE_VERSION 2. 3417 + correct manpage description of mouseinterval() (Debian #280687). 
3418 + add a note to default_colors.3x explaining why this extension was 3419 added (Debian #295083). 3420 + add traces to panel library. 3421 3422 20050212 3423 + improve editing of wide-characters in form library: left/right 3424 cursor movement, and single-character deletions work properly. 3425 + disable GPM mouse support when $TERM happens to be prefixed with 3426 "xterm". Gpm_Open() would otherwise assert that it can deal with 3427 mouse events in this case. 3428 + modify GPM mouse support so it closes the server connection when 3429 the caller disables the mouse (report by Stanislav Ievlev). 3430 3431 20050205 3432 + add traces for callback functions in form library. 3433 + add experimental configure option --enable-ext-mouse, which defines 3434 NCURSES_MOUSE_VERSION 2, and modifies the encoding of mouse events to 3435 support wheel mice, which may transmit buttons 4 and 5. This works 3436 with xterm and similar X terminal emulators (prompted by question by 3437 Andreas Henningsson, this is also related to Debian #230990). 3438 + improve configure macros CF_XOPEN_SOURCE and CF_POSIX_C_SOURCE to 3439 avoid redefinition warnings on cygwin. 3440 3441 20050129 | http://ncurses.scripts.mit.edu/?p=ncurses.git;a=blob;f=NEWS;h=d69e57e1b31c4a714e5f5e320694b058f050eacd;hb=beb0f0c6911096ee19815bdf2601c4317d80341f | CC-MAIN-2022-40 | refinedweb | 25,586 | 66.13 |
How to implement Firebase remote config in iOS
Change your app behavior without publishing a new release using Firebase remote config
Introduction
Suppose you want to change your app theme’s color daily, based on your random choice. So will you release a new build daily with different colors?
No, right? That is practically not possible. Here comes the Firebase remote config.
Firebase remote config allows you to change the behavior of your app on the fly, without releasing a new build.
In this firebase tutorial you will learn how to connect your iOS app to the Firebase and then the implementation of Firebase remote config.
Implementation
Create a new iOS project. If you have an existing project then also you will able to follow up on this tutorial.
1. Linking the app with Firebase
Open the firebase and create a new project. I have given “remote config iOS” as my firebase project name.
Go to the project settings and click on the iOS icon to add an iOS app as shown in the below screenshot.
Now you will get something like this, Here you have to give the bundle id of your app.
After clicking on the Register app button you will get a JSON file. Download this JSON file and add to the root of your Xcode project and add it to all targets.
Add the json file using the Xcode.
Till here we have added our project to firebase console, now we have to add Firebase SDK to our app. To install the Firebase SDK we will use the Cocoa-pods dependency manager.
Close the Xcode and open a terminal window and navigate to the location of the Xcode project for your app and run the command
pod init, it will create a pod file. Open this pod file using some editor and add
pod 'Firebase/RemoteConfig'. Your pod file should look like this.
Now run
pod install. It will add firebase SDK to your iOS app
If you want to add some more firebase dependency then check this list
A new workspace fill will be created, now onwards you have to use
your-project.xcworkspace only to open your project.
This is the last step 😅. Open the AppDelegate.swift and add this one line of code in
didFinishLaunchingWithOptions function.
FirebaseApp.configure()
Till here we have successfully added the firebase into our project.
2. Add remote config key value in firebase
In this step, we will add a key and its value on the firebase remote config page. Open the Remote Config dashboard from the left side menu.
I have set “welcome_text” as the key. Using this key only we will be able to fetch the corresponding data in our app.
Do not forget to publish your changes after updating the key and value.
3. Fetch and update the UILabel’s text using remote config
To demonstrate the Firebase remote config, I am taking a. very basic app design. This app will have one UILablel. I will show, how you can change its text value using the remote config.
Create a UILabel on Main.storyboard and make its reference in ViewController.swift.
Here is the complete code.
import UIKit import Firebase class ViewController: UIViewController { @IBOutlet weak var welcomeLabel: UILabel! var remoteConfig: RemoteConfig! //Your firebase key let welcome_key = "welcome_text" override func viewDidLoad() { super.viewDidLoad() // [START get_remote_config_instance] remoteConfig = RemoteConfig.remoteConfig() setupRemoteDefaultValues() fetchConfig() } //Setting up default value. This value will be used when firebase //has is facing some issue while fetching the updated value from the server func setupRemoteDefaultValues() { let defaults = [ "welcome_text": "This is default" as NSObject ] remoteConfig.setDefaults(defaults) } func fetchConfig() { welcomeLabel.text = remoteConfig[welcome_key].stringValue // [START fetch_config_with_callback] remoteConfig.fetch(withExpirationDuration: 0) { (status, error) in if status == .success { print("Config fetched!") self.remoteConfig.activate() { (error) in // ... } } else { print("Config not fetched") print("Error: \(error?.localizedDescription ?? "No error available.")") } self.displayWelcome() } // [END fetch_config_with_callback] } func displayWelcome() { // [START get_config_value] let welcomeMessage = remoteConfig[welcome_key].stringValue // [END get_config_value] welcomeLabel.text = welcomeMessage } }
When you run this code, you will get something like this. | https://www.warmodroid.xyz/tutorial/ios/how-to-implement-firebase-remote-config-in-ios/ | CC-MAIN-2020-50 | refinedweb | 668 | 59.7 |
Subject: Re: [boost] numeric_cast
From: Fernando Cacciola (fernando.cacciola_at_[hidden])
Date: 2011-06-22 15:38:33
On 6/22/2011 2:15 PM, Vicente Botet wrote:
>
> Antony Polukhin wrote:
>>
>> 2011/6/22 Brandon Kohn<blkohn_at_[hidden]>:
>>> The defaults of numeric_cast_traits definition conform to the current
>>> behavior. It would only change if the users customize it. From that angle
>>> I
>>> think the greater flexibility comes with a bit more responsibility on the
>>> users side. Seems like a fair trade-off to me.
>>
>> Well, then here is some ideas.
>>
>> I am really afraid that some user will define such trait in header
>> file. Then all the code above the included header file will use the
>> old conversion trait, and below - the new trait. Include files became
>> dependent from their order. It is really error prone. User can forget
>> about the trait, change the inclusion order and get some really hard
>> detectable errors. So it would be more save not to modify the behavior
>> of boost::numeric_cast<N>(x) function, but rather create function with
>> some other name, for example boost::numeric_traited_cast<N>(x) and
>> create a separate namespace in boost::numeric for the traits.
>>
>>
>
> You are right that changing the implementation in this way can break some
> code in the future.
> Maybe we should be more conservative.
>
> The library has already a namespace boost::numeric so what about
> boost::numeric::convert_to/convert instead?
>
But this won't address the problem Brandon is trying to solve: namely, to allow
numeric_cast<> --and no other function-- to really work with custom numeric
types (which might have specific policy requirements).
I do understand Antony's concern, but I think is a bit of a red herring.
That is, I doubt it could really happen in practice, even though it technically
could.
In any case, what is really important IMO is the keep the behaviour stable for
the fundamental types. Thus a reasonable balance could be, IMO, that the
additional traits is given already specialized for the fundamental types. This
way, the extensibility is only really available for custom types.
Sort of the way numeric_limits<> works.
Best
-- Fernando Cacciola SciSoft Consulting, Founder
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2011/06/183060.php | CC-MAIN-2020-40 | refinedweb | 378 | 57.98 |
On Sat, 25 Mar 2000, Guido van Rossum wrote: > >. > > Bullshit, Greg. (I don't normally like to use such strong words, but > since you're being confrontational here...) Fair enough, and point accepted. Sorry. I will say, tho, that you've taken this slightly out of context. The next paragraph explicitly stated that I don't believe you had this intent. I just felt that coming up with a complete plan before doing anything would be prone to failure. You asked to invent a new reason :-), so I said you had one already :-) Confrontational? Yes, guilty as charged. I was a bit frustrated. > I'm all for doing it incrementally -- but I want the plan for how to > do it made up front. That doesn't require all the details to be > worked out -- but it requires a general idea about what kind of things > we will have in the namespace and what kinds of names they get. An > organizing principle, if you like. If we were to decide later that we > go for a Java-like deep hierarchy, the network package would have to > be moved around again -- what a waste. All righty. So I think there is probably a single question that I have here: Moshe posted a large breakdown of how things could be packaged. He and Ping traded a number of comments, and more will be coming as soon as people wake up :-) However, if you are only looking for a "general idea", then should python-dev'ers nit pick the individual modules, or just examine the general breakdown and hierarchy? thx, -g -- Greg Stein, | https://mail.python.org/pipermail/python-dev/2000-March/002847.html | CC-MAIN-2017-04 | refinedweb | 269 | 71.85 |
New firmware release v1.12.0.b1 (Sigfox and LoRa coexistency on the FiPy)
Hello,
This release mainly adds Sigfox and LoRa coexistency on the FiPy. There are some additional bug fixes included:
- esp32: LoRa+Sigfox coexistency on the FiPy.
- esp32: Allow pin interrupts to be triggered even when the cache is disabled.
- esp32: Do not allow timer alarms ISR to run with the cache disabled.
Otherwise the CPU crashes when writing to flash and the interrupt triggers.
- esp32: Solve BLE hang when searching for services.
- esp32: Correct behaviour of Alarm.callback(None). The alarm is correctly cancelled now.
- modpycom.c/pulses_get(): prevent lock-up at long pulses. Thanks to @robert-hh.
- esp32: Use spi_flash_get_chip_size() to determine the flash size. Thanks to @robert-hh.
Cheers,
Daniel
@daniel
Wipy2
I give another try with recent software compared to 1.7.2.b1 {which is stablest for me for wipy2}.
After flash (today):
- connection to router give me timeout (30seconds) but if i ignore this i can connect to the socket in the network - i use static ip setting.
- i2c (
i2c=I2C(0, I2C.MASTER, baudrate=100000, pins=("P9", "P10")))
- bh1750 ok,
- pcf8574 ok
- read from bmp180 read only 0 for preasure but temperature is ok
flashed 3 times and tested 30 times with removing cables (5 times and also simple reset 5 for every flash).
downgraded to 1.7.2.b1 all work ok.
I must bring back my logic analyser to see issue on i2c but maybe you have some hint here?
And also what can be the reason of issue from point 1?
@naveen we will add Japan with the next release of the updater tool. In the meantime, please select Europe as Sigfox region. Thanks.
OK, I am trying to upgrade/register again. Now I see an option "Country Selection" and I choose "Japan" where I live. Next is "SigFox region selection" but I do not find "Japan" in the available options.
- Xykon administrators last edited by
@naveen It seems your sigfox credentials haven't been properly generated during the initial upgrade/device registration.
I have removed your device... can you please try to upgrade/register again?
Mac address: <removed by admin>
@naveen please let us know your WLAN mac address so that we can reset and you can get your Sigfox ID and PAC again.
Please run:
import binascii from network import WLAN print(binascii.hexlify(WLAN().mac()))
And let us know your MAC address. Thanks.
Cheers,
Daniel
Everytime I run "os.uname()", FiPy disconnects and only gives partial output as below:
( Failed to connect (Connection was reset)
After updating and doing:
import os os.uname()
What do you get?
Cheers,
Daniel
I am getting the message shown below but no id or pac info:
Result:
Your device was successfully updated!
Please remove the wire and reset the board.
- Xykon administrators last edited by
@danielm You can do that with the command line version of the tool. I will post some more details about that with the next beta release.
@robert-hh thanks. Found the cause. Will fix ASAP.
@daniel uos.uname causes a core dump on FiPy. See
- Xykon administrators last edited by Xykon
Just in case anyone needs it, I have updated the download links under downgrading for advanced users to include the previous 1.10.2.b1 major release.
I am working on a new firmware update tool that will have downgrade functionality. You should see another beta being released later this week. | https://forum.pycom.io/topic/2371/new-firmware-release-v1-12-0-b1-sigfox-and-lora-coexistency-on-the-fipy | CC-MAIN-2022-21 | refinedweb | 579 | 68.26 |
Greedy approach to find a single maximal clique in O(V^2) time complexity
Sign up for FREE 1 month of Kindle and read all our books for free.
Get FREE domain for 1st year and build your brand new site
Reading time: 40 minutes.
There can be more than one single maximal clique in a non-complete graph (since complete graph is a maximal clique itself). To find a single maximal clique in a graph we use a straightforward greedy algorithm.
Algorithm
In greedy approach of finding a single maximal clique we start with any Starting with an arbitrary clique (for instance, any single vertex or even the empty set), grow the current clique one vertex at a time by looping through the graph's remaining vertices.
For each vertex v that this loop examines, add v to the clique if it is adjacent to every vertex that is already in the clique, and discard v otherwise. The maximal clique can be different if we start with differnet vertex as a graph may have more than one maximal clique.
- Start from an arbitrary vertex
- Given a clique of size, repeat:
- Add a vertex randomly from the common neighbors of the existing clique
- If there is no common neighbors, stop and return the clique
The above algorithm runs in linear time in the size of the graph (Θ(n2) edges) because of:
- the ease of finding maximal cliques
- their potential small size
Following is an example graph with 6 vertices and 4 different maximal cliques. On selection of different arbitrary start vertex we may get different maximal clique as output of the algorithm.
Implementation
Code in Python3
from collections import defaultdict import random def find_single_clique(graph): clique = [] vertices = list(graph.keys()) rand = random.randrange(0, len(vertices), 1) clique.append(vertices[rand]) for v in vertices: if v in clique: continue isNext = True for u in clique: if u in graph[v]: continue else: isNext = False break if isNext: clique.append(v) return sorted(clique) graph = dict() graph['A'] = ['B', 'C', 'E'] graph['B'] = ['A', 'C', 'D', 'F'] graph['C'] = ['A', 'B', 'D', 'F'] graph['D'] = ['C', 'E', 'B', 'F'] graph['E'] = ['A', 'D'] graph['F'] = ['B', 'C', 'D'] clique = find_single_clique(graph) print('A maximal clique in the graph is: ', clique)
Output
A maximal clique in the graph is: ['B', 'C', 'D', 'F']
Complexity
Time Complexity
- The greedy algorithm takes O(n2) (i.e. O(Edges)) time in the worst case.
Space Complexity
- The greedy algoorithm takes O(n2) auxiliary space in the worst case.
Related articlesUsing Bron Kerbosch algorithm to find maximal cliques in O(3^(N/3))
Algorithm to find cliques of a given size k 【O(n^k) time complexity】 | https://iq.opengenus.org/greedy-approach-to-find-single-maximal-clique/ | CC-MAIN-2021-21 | refinedweb | 453 | 54.97 |
Last updated on September 30th, 2017
Ionic 2 Facebook login has become one of the most common ways to get users to sign in to your hybrid app, so common that it'd be dumb not to know how to do it.
In this 'short' guide, I'm going to walk you through the process of getting someone who just downloaded your app to sign in using their Facebook login credentials, with Ionic 2 and Firebase.
We’ll break things down into three main steps:
- Step #1: We’ll create the app and connect it to Firebase.
- Step #2: We’ll log into your Facebook developer account and set everything up from there.
- Step #3: We’ll connect Ionic, Firebase, and Facebook together to create an authentication process.
Make sure to get the code directly from GitHub so you can follow along with the post.
Step #1: Create your new Ionic App
1.1) Make sure your development environment is up to date
Before writing any code, we are going to take a few minutes to install everything we need to build this app; that way, we won't have to keep switching context between coding and installing.
The first thing we'll do is install Node.js. Make sure you get version 6.x; even though v4 works for most users, it will install the wrong version of typings and a lot of things will break in your app.
The second thing we'll do is make sure we have Ionic and Cordova installed. We'll do that by opening your terminal and typing:
$ npm install -g ionic cordova
Depending on your operating system (mostly if you run on Linux or Mac) you might have to add sudo before the npm install... command.
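Once the install finishes, it can help to sanity-check that the tools are actually available before moving on. Here's a quick sketch (a hypothetical session; the version numbers you see will differ on your machine):

```shell
# Sanity-check the toolchain. The loop prints each tool's version if
# it is on the PATH, or a note if it still needs installing.
for tool in node npm ionic cordova; do
  if command -v "$tool" >/dev/null 2>&1; then
    printf '%s: ' "$tool"
    "$tool" --version
  else
    echo "$tool: not installed (or not on PATH)"
  fi
done
```

If node reports something other than v6.x, revisit the step above before continuing.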
1.2) Create the App
Now that you made sure everything is installed and up to date, you’ll create a new Ionic 2 app.
For this, you just need to (while still in your terminal) navigate to the folder you’d like to create it in.
For me, it’s my Development folder in my ~/ directory:
$ cd Development
$ ionic start debtTracker blank --v2
$ cd debtTracker
What those lines there do is the following:
- First, you'll navigate to the Development folder.
- Second, you'll create the new Ionic 2 app:
  - ionic start creates the app.
  - debtTracker is the name we gave it.
  - blank tells the Ionic CLI you want to start with the blank template.
  - --v2 tells the Ionic CLI you want to create an Ionic 2 project instead of an Ionic 1 project.
- Third, you'll navigate into the new debtTracker folder; that's where all of your app's code is going to be.
From now on, whenever you are going to type something in the terminal it’s going to be in this folder unless I say otherwise.
1.3) The npm packages that come with the project

When you use the Ionic CLI to create a new project, it's going to do a lot of things for you; one of those things is making sure your project has the necessary npm packages/modules it needs.
That means the start command is going to install ionic-angular, all of the Angular requirements, and more. Here's what the package.json file would look like:
{ "name": "ionic-hello-world", "author": "Ionic Framework", "homepage": "", "private": true, "scripts": { "ionic:build": "ionic-app-scripts build", "ionic:serve": "ionic-app-scripts serve" }, -native/core": "^3.2.2", "@ionic-native/splash-screen": "^3.2.2", "@ionic-native/status-bar": "^3.2.2", "@ionic/storage": "2.0.0", "ionic-angular": "2.2.0", "ionicons": "3.0.0", "rxjs": "5.0.1", "sw-toolbox": "3.4.0", "zone.js": "0.7.2" }, "devDependencies": { "@ionic/app-scripts": "1.1.4", "typescript": "2.0.9" }, "description": "fb-auth: An Ionic project", "cordovaPlugins": [ "cordova-plugin-device", "cordova-plugin-console", "cordova-plugin-whitelist", "cordova-plugin-splashscreen", "cordova-plugin-statusbar", "ionic-plugin-keyboard" ], "cordovaPlatforms": [] }
Depending on when you read this, these packages might change (especially the version numbers), so keep that in mind; also, you can leave a comment below if you have any questions/issues/problems with this.
1.4) Install Firebase
Open your terminal (you should already be in the project folder) and install the packages in this order:
$ npm install firebase --save
1.5) Import and Initialize
Now you can initialize Firebase by going to src/app/app.component.ts and importing everything you need from Firebase:

You can open your app.module.ts and import everything we'll be using, and this is the only time you'll see this file 🙂
import firebase from 'firebase';
And then initialize it inside the constructor:
constructor(platform: Platform) {
  firebase.initializeApp({
    apiKey: "",
    authDomain: "",
    databaseURL: "",
    storageBucket: "",
    messagingSenderId: ""
  });

  platform.ready().then(() => {
    // Okay, so the platform is ready, and our plugins are available.
    // Here you can do any higher level native things you might need.
    StatusBar.styleDefault();
    Splashscreen.hide();
  });
}
In the end the file should look like this:
import { Component } from '@angular/core';
import { Platform } from 'ionic-angular';
import { StatusBar } from '@ionic-native/status-bar';
import { SplashScreen } from '@ionic-native/splash-screen';
import { HomePage } from '../pages/home/home';
import firebase from 'firebase';

@Component({
  template: `<ion-nav [root]="rootPage"></ion-nav>`
})
export class MyApp {
  rootPage = HomePage;

  constructor(platform: Platform, private statusBar: StatusBar,
      private splashScreen: SplashScreen) {
    firebase.initializeApp({
      apiKey: "AIzaSyALKfevapBOYK202f6k5mPPfMrT1MHDv5A",
      authDomain: "bill-tracker-e5746.firebaseapp.com",
      databaseURL: "",
      storageBucket: "bill-tracker-e5746.appspot.com",
      messagingSenderId: "508248799540"
    });

    platform.ready().then(() => {
      // Okay, so the platform is ready and our plugins are available.
      // Here you can do any higher level native things you might need.
      statusBar.styleDefault();
      splashScreen.hide();
    });
  }
}
We are using:
firebase.initializeApp({
  apiKey: "",
  authDomain: "",
  databaseURL: "",
  storageBucket: "",
  messagingSenderId: ""
});
To initialize our Firebase app.
Right there your app should be able to run without any errors when you do ionic serve.
You can find your config data (the one that goes inside firebase.initializeApp()) in the Firebase Console.

You just go to the console, click on your app (or create a new one), and it's going to give you a few choices.

You'll pick Add Firebase to your web app because, remember, we are going to use a mix of AF2 with the JS SDK.
If you are running into trouble getting this to work, you can leave a comment below and let me know!
Step #2: Configure your Facebook Developer Console
2.1) Bad News First
Firebase authentication wasn’t entirely compatible with cordova. So the regular JS SDK or AF2 methods to do social sign-in don’t work with Ionic 2 apps yet.
2.2) Now the good news
There’s a workaround, and you need to add some specific operations, especially download a cordova plug-in for Facebook, where you can use their API to get credentials for your user, and then pass those credentials to Firebase.
So on the back-end it will be more work, it won’t be as cool as the docs show you, but on the front-end, your user never has to know 🙂
We need to work out a few things with Facebook first to get this working. For example, you need to register as a developer and create an app; you'll get some data there that you need to pass to the plugin later.
I know it might sound a bit difficult, but let’s just try and see how it goes, OK?
2.3) Create a Facebook App
The first thing you’ll need is to register for a Facebook application. You can go to and click on the shiny green button that says Create a New App
You’ll be greeted by a pop-up asking you which kind of app you want to create
You can either choose www or tell it to skip and use basic setup, remember, we just need the ID for the plug-in.
After that, you’ll get to name your app and generate an ID
Just fill in the info and go; next you can check the data we're looking for (or, if you're like me, you'll get asked to fill in a reCAPTCHA because FB thinks you're a robot).
I created a second app because the first one had spaces in the name and FB still had me go through reCAPTCHA again 🙁
You’ll get redirected to your app’s dashboard, where you’ll get the information we need.
2.4) Get the Facebook Plugin
Now we need to get the plugin, detailed instructions are in the ionic-native docs.
But you’re going to open your terminal (by this point you should have your app created and set to work with Firebase and AF2) and type:
$ ionic plugin add cordova-plugin-facebook4 --variable APP_ID="123456789" --variable APP_NAME="myApplication"

Where APP_ID is the id for the app we just created on the Facebook developers page, and APP_NAME is its name.
Now we need to go back to the Facebook developer page to add both platforms, iOS, and Android, but first, they’re going to ask us for some info, let’s get it now.
Go to your app’s
config.xml file and look for this line of code:
<widget id="com.ionicframework.fbauth285351" version="0.0.1" xmlns="" xmlns:
We’re going to focus on the
id property, so take note of that id, for me, is
com.ionicframework.fbauth285351 and I’m sure it’s going to be different for you.
Now we need to install the ionic-native package for Facebook:
$ npm install --save @ionic-native/facebook
And initialize it in app.module.ts so you can use it throughout the app:
import { SplashScreen } from '@ionic-native/splash-screen';
import { StatusBar } from '@ionic-native/status-bar';
import { Facebook } from '@ionic-native/facebook';

@NgModule({
  ...,
  providers: [
    SplashScreen,
    StatusBar,
    Facebook
  ]
})
export class AppModule {}
2.5) Add the Platforms
Now go back to your app’s dashboard on Facebook and click under settings you’ll see an option that says, add platform.
You’ll get a prompt asking you which platform, just choose iOS and you’ll see this
Just copy that id into the Bundle ID field, then click on "Add Platform" again; this time choose Android and add the id in the Google Play package name.
You’ll get a warning, saying it can’t find the package name, but that’s because the app isn’t up yet, so no biggie.
2.6) Enable Firebase
You need to go to your Firebase Console and enable Facebook Authentication, just as you enabled email or anonymous sign-in.
Go to:
console.firebase.google.com/project/YOURPROJECTNAMEHERE/authentication/providers
Enable Facebook; it’s going to ask you for your app’s secret and ID, you can get those from your Facebook app dashboard.
Step #3 Ionic 2 Facebook Login
Now we have all the external configuration done, so it’s time to work on our app, go ahead and open the app in your favorite text editor (after being anti-Microsoft for about six years I’m using VSCode, great TS support 😛)
3.1) Create the template
The first thing we’ll do is edit the
HomePage template. We want to create a button there, something simple, something that lets our users click and then log-in with Facebook.
After that, we’re going to show them some info on their Facebook profile, and it’s going to look a bit like this.
<ion-header>
  <ion-navbar>
    <ion-title>
      Ionic Blank
    </ion-title>
  </ion-navbar>
</ion-header>

<ion-content padding>
  <h3 center> Facebook Auth Example </h3>
  <button ion-button center (click)="facebookLogin()">
    Log In with Facebook
  </button>
  >
</ion-content>
Where

<button> Log In with Facebook </button>

calls the function to log in with Facebook, and

>

is just a card that shows the user's information. Note that we are hiding this card unless the variable userProfile exists; we'll create that variable next in the .ts file.
3.2) Let’s get coding
Now move to home.ts and you should see something like this:
import { Component } from '@angular/core';
import { NavController } from 'ionic-angular';

@Component({
  selector: 'page-home',
  templateUrl: 'home.html'
})
export class HomePage {

  constructor(public navCtrl: NavController) {...}

}
The first thing we’ll do is import Facebook’s plugin and Firebase.
import { Facebook } from '@ionic-native/facebook';
import firebase from 'firebase';
Now create the variable userProfile and inject the Facebook provider into our constructor.
userProfile: any = null;

constructor(public navCtrl: NavController, private facebook: Facebook) {...}
And now we need to do the actual Facebook login. This is going to consist of a few parts.
First, we need to call Facebook’s native plugin to get the user to log-in to Facebook.
Second, we need to get the token from that response and pass it to Firebase to use the
And third, after we successfully create the Firebase account, we’ll assign the value to
userProfile so we can display its data.
facebookLogin(){) }); }
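Since the real plugin only works on a device, here is a framework-free sketch of those three parts, with mock objects standing in for the Facebook plugin and Firebase. fakeFacebook and fakeFirebaseAuth are assumptions for illustration only, not the real APIs; in the app you would call this.facebook.login() and Firebase's sign-in method instead:

```typescript
type FacebookLoginResponse = { authResponse: { accessToken: string } };

// Mock stand-ins so the flow can run anywhere; these are NOT the real
// @ionic-native/facebook or Firebase objects.
const fakeFacebook = {
  login: (_permissions: string[]): Promise<FacebookLoginResponse> =>
    Promise.resolve({ authResponse: { accessToken: "fb-token-123" } })
};

const fakeFirebaseAuth = {
  signInWithCredential: (accessToken: string) =>
    Promise.resolve({ displayName: "Test User", credentialUsed: accessToken })
};

async function facebookLogin() {
  // 1. Ask the (mock) Facebook plugin to log the user in.
  const response = await fakeFacebook.login(["email", "public_profile"]);
  // 2. Hand the access token over to (mock) Firebase.
  const profile = await fakeFirebaseAuth.signInWithCredential(
    response.authResponse.accessToken
  );
  // 3. Keep the profile around so the template's card can show it.
  return profile;
}

facebookLogin().then(profile => console.log(profile.displayName));
```

The shape is the same in the real app: the plugin's response carries an access token, and that token is what you hand to Firebase.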
By the end, your file should look something like this
import { Component } from '@angular/core';
import { Facebook } from '@ionic-native/facebook';
import { NavController } from 'ionic-angular';
import firebase from 'firebase';

@Component({
  selector: 'page-home',
  templateUrl: 'home.html'
})
export class HomePage {
  userProfile: any = null;

  constructor(public navCtrl: NavController, private facebook: Facebook) {}

  facebookLogin(): void {) }); }
}
And after success, you should see all the information inside the app
By now you have a fully working sign-in process with Facebook using Firebase. Now, it might not be ideal, but it's what we can do for now.
You just went through a lot:
First, you set-up your development environment and made sure everything was up-to-date.
Then you created a Facebook developer account and created an app there
And then you connected everything in your Ionic app to allow users to sign-up using their Facebook accounts 🙂
Help:Links
There are four kinds of links in MediaWiki:
- internal links to other pages in the same wiki
- external links to other websites
- interwiki links to other websites specifically registered as possible link targets
- interlanguage links to other websites registered as other language versions of the same wiki
This wiki does not use the fourth type of link, since we handle multiple languages differently.
Internal links
An internal link (from a "source" page to a "target" page on the same wiki) is formed by enclosing the name of the desired target page in [[double square brackets]]. If it is awkward to use the actual page title at that point in the source page text, a piped link can be used (see examples below).
When the source page is previewed or saved, the new link will be visible. If the target page already exists on this wiki, it will appear blue (or purple, if the page has already been visited by the reader); if not, it will appear red. (Following a "redlink" will bring up the page editor in which the page can be created. For more information about editing and creating pages, please see our other Help pages.)
Note that "selflinks", in which the source and target pages are the same, are not shown as links but instead displayed in bold. If you want to link to a particular place (i.e., a section heading) in the current page, use an "anchor" (see below), or use [[#top|current page]], which always links to the top of the page.
Note that the first letter of the target page is case-insensitive (it is automatically capitalized in page titles but can be upper- or lower-case in links), and that spaces can be represented as underscores (typing an underscore in a link is equivalent to typing a space, but is not recommended, except possibly in piped links, since the underscore will be visible to the reader).
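A few sketches of internal links in wikitext (the page names are only examples and may not exist on this wiki):

```wikitext
[[Handbook]]                  An ordinary internal link.
[[Handbook|the Handbook]]     A piped link with different link text.
[[Handbook#Installation]]     A link to a section heading (an "anchor") on the target page.
```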
External links
An external link (to a "target" page on another website) is formed by enclosing the URL of the desired target page in [single square brackets]. If no other text is specified, the link is shown as a bracketed number; however, this is discouraged in favor of the equivalent of a piped link, which in this case is accomplished by simply separating the normal "link text" from the URL with a space (see examples below).
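For example (example URL only):

```wikitext
[https://www.example.org]                   Shown as a bracketed number.
[https://www.example.org Example website]   Link text separated from the URL by a space.
```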
When the source page is previewed or saved, the new link will be visible followed by a small icon to indicate that the reader will leave the current wiki if they follow the link. Unlike with internal links, an external link will look the same whether or not the target page actually exists.
If the target website can be browsed in secure HTTP, it is preferable to omit the "https:" portion of the URL.
External links to internal pages
Linking to a wiki page whose URL contains parameters (set off by ? and &) is most easily accomplished using an external link.
External link icons
If the link uses a special protocol or its target is a special kind of file (based on the file extension in the URL), it may be marked with a special icon (instead of the default showing an arrow).
Interwiki links
Interwiki links are internal-style links to external websites that have been registered in advance as useful targets for outgoing links. The most well known such site is the English Wikipedia.
Unlike other internal links, interwiki links do not rely on page existence detection, so an interwiki link will look the same whether the target page exists or not.
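An interwiki link might look like this, assuming a wikipedia: prefix is registered in this wiki's interwiki table:

```wikitext
[[wikipedia:MediaWiki|MediaWiki on Wikipedia]]
```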
Interlanguage links
Interlanguage links (in which the target is a page in a different language hosted at an entirely different wiki) are not used on the Gentoo Linux wiki. Instead the Translate extension is used to maintain pages in languages other than English. Please see the Gentoo Wiki:FAQ for more information.
Whatlinkshere
For including a section of pages linking to the present page, use the {{Whatlinkshere}} template. This is useful within noinclude tags on pages that are transcluded into others.
<noinclude> What links here: {{Special:Whatlinkshere/Help:Links|namespace=Help}} </noinclude>
See also
- Manual:Linked images on mediawiki.org
- Help:Links on meta.wikimedia.org | https://wiki.gentoo.org/wiki/Help:Links | CC-MAIN-2020-29 | refinedweb | 709 | 55.68 |
Apache::HTPL - Apache mod_perl driver for HTPL.
After installed, this module will boost the performance of HTPL by having pages compiled in memory and run again and again. It utilizes the Apache mod_perl extension, and can't be otherwise used.
The HTPL page translator is compiled into this module as an XSUB extension. (I could not execute the page translator as a child process - any advice?)
The easiest way to install HTPL under mod_perl is to ask for it when running the configure script:
./configure --enable-modperl
Suppose installation was done as root (which is highly recommended), configuration file will be initialized.
Otherwise, add the following lines to httpd.conf:
PerlModule Apache::HTPL
<Files ~ "*.htpl"> SetHandler perl-script PerlHandler Apache::HTPL </Files>
Apache::HTPL works similarly to Apache::Registry - it creates a namespace and a subroutine for every page. It will attempt to clear the namespace between page calls, to allow "dirty" scripting by assuming empty variables. It does so by consulting the stash of a package.
Consistent values can be stored in different namespaces - recommended is Apache::HTPL::Vars or the like. Do not use this for stateful sessions, as Apache spawns several processes on a server. Use the internal persistent objects to keep stateful sessions, via the %application and %session hashes. Global variables can be used to initialize tables, for example.
The Database module will cache database connections and reuse them. Since this uses up one database connection per Apache process, you can disable this feature by editing you configuration file and changing the value of $htpl_db_save.
The htpl-config.pl file will still be stored on the cgi-bin directory while using HTPL on mod_perl mode, as will be the htpldbg page translator. The installation will always create htpl.cgi. | http://search.cpan.org/~schop/htpl/mod_perl/HTPL.pm | CC-MAIN-2015-22 | refinedweb | 293 | 57.37 |
The.
The first thing to do is to define Defensive Programming and the first definition I came across was in what is now possibly a legendary book: Writing Solid Code by Steve Maguire published by Microsoft Press. I read this book many years ago when I was a C programmer, which was then the defacto language of choice. In the book Steve demonstrates the use of an
_Assert macro:
/* Borrowed from Complete Code by Steve Maguire */ #ifdef DEBUG void _Assert(char *,unsigned) /* prototype */ #define ASSERT(f) \ if(f) \ { } \ else _Assert(__FILE__,__LINE__) #else #define ASSERT(f) #endif // ...and later on.. void _Assert(char *strFile,unsigned uLine) { fflush(NULL); fprintf(stderr, '\nAssertion failed: %s, line %u\n',strFile,uLine); fflush(stderr); abort(); } /////// ...and then in your code void my_func(int a). { ASSERT(a != 0); // do something... }
…as his definition of defensive programming. The idea here is that we’re defining a C macro that when DEBUG is turned on,
my_func(…) will test it’s input using the
ASSERT(f) and that will call the
_Assert(…) function if the condition fails. Hence, when in DEBUG mode, in this sample
my_func(int a) has the ability to abort execution if arg
a is zero. When DEBUG is switched off, checked aren’t carried out, but the code is leaner and quicker; something which was probably more of a consideration back in 1993.
Looking at this definition, several things come to mind. Firstly, this book was published in 1993, so is this still valid? It wouldn’t be a good idea to kill Tomcat by using a
System.exit(-1) if one of your users typed in the wrong input! Secondly, Java being more recent is also more sophisticated, has exceptions and exception handlers, so instead go aborting the program we’d throw an exception that would, for example, display an error page highlighting bad inputs. The main point that comes to mind, however, is that this definition of defensive programming sounds a lot like fail-fast to me, in fact it’s identical.
This isn’t the first time that I’ve heard programmers complain about defensive programming, so, why has it got such a bad reputation? Why did the elrang talk presenter denigrate it so much? My guess is that there’s good use of defensive programming and bad use of defensive programming. Let me explain with some code…
In this scenario I'm writing a Body Mass Index (BMI) calculator for a program that tells users whether or not they're overweight. A BMI value between 18.5 and 25 is apparently okay, whilst anything over 25 ranges from overweight to severely obese, with lots of life-limiting issues. The BMI calculation uses the following simple formula:
BMI = weight (kg) / height (m)²
The reason I chose this formula is that it presents the possibility of a divide-by-zero error, which the code I write must defend against.
public class BodyMassIndex {

    /**
     * Calculate the BMI using Weight(kg) / height(m)2
     *
     * @return Returns the BMI to four significant figures eg nn.nn
     */
    public Double calculate(Double weight, Double height) {

        Validate.notNull(weight, "Your weight cannot be null");
        Validate.notNull(height, "Your height cannot be null");
        Validate.validState(weight.doubleValue() > 0, "Your weight cannot be zero");
        Validate.validState(height.doubleValue() > 0, "Your height cannot be zero");

        Double tmp = weight / (height * height);
        BigDecimal result = new BigDecimal(tmp);
        MathContext mathContext = new MathContext(4);
        result = result.round(mathContext);
        return result.doubleValue();
    }
}
The code above uses the idea put forward in Steve's 1993 definition of defensive programming. When the program calls calculate(Double weight, Double height), four validations are carried out, testing the state of each input argument and throwing an appropriate exception on failure. As this is the 21st century I didn't have to define my own validation routines; I simply used those provided by the Apache commons-lang3 library and imported:
import org.apache.commons.lang3.Validate;
…and added:
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.1</version>
</dependency>
…to my pom.xml.
The Apache commons lang library contains the Validate class, which provides some basic validation. If you need more sophisticated validation algorithms take a look at the Apache commons validator library.
Once validated, the calculate(…) method calculates the BMI and rounds it to four significant figures (e.g. nn.nn). It then returns the result to the caller. Using Validate allows me to write lots of JUnit tests to ensure that everything goes well in case of trouble and to differentiate between each type of failure:
public class BodyMassIndexTest {

    private BodyMassIndex instance;

    @Before
    public void setUp() throws Exception {
        instance = new BodyMassIndex();
    }

    @Test
    public void test_valid_inputs() {
        final Double expectedResult = 26.23;
        Double result = instance.calculate(85.0, 1.8);
        assertEquals(expectedResult, result);
    }

    @Test(expected = NullPointerException.class)
    public void test_null_weight_input() {
        instance.calculate(null, 1.8);
    }

    @Test(expected = NullPointerException.class)
    public void test_null_height_input() {
        instance.calculate(75.0, null);
    }

    @Test(expected = IllegalStateException.class)
    public void test_zero_height_input() {
        instance.calculate(75.0, 0.0);
    }

    @Test(expected = IllegalStateException.class)
    public void test_zero_weight_input() {
        instance.calculate(0.0, 1.8);
    }
}
One of the ‘advantages’ of the C code is that you can turn the ASSERT(f) off and on using a compiler switch. If you need to do this in Java then take a look at using Java's assert keyword.
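As a quick illustration (my own sketch, not from Maguire's book or the original post), the same DEBUG-style check can be written with the assert keyword. Note that Java assertions are disabled by default and only run when the JVM is started with -ea:

```java
public class Bmi {

    static double calculate(double weight, double height) {
        // Like the C ASSERT macro, these checks only run when enabled
        // (java -ea Bmi); without -ea they cost nothing at runtime.
        assert weight > 0 : "Your weight cannot be zero";
        assert height > 0 : "Your height cannot be zero";
        // Round to two decimal places (a simpler rounding than the
        // BigDecimal approach used earlier).
        return Math.round((weight / (height * height)) * 100.0) / 100.0;
    }

    public static void main(String[] args) {
        System.out.println(calculate(85.0, 1.8)); // prints 26.23
    }
}
```

This mirrors the compiler-switch behaviour of the C version: in production (no -ea) the checks vanish, so you'd still want real validation at trust boundaries.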
The above sample is what I’d hope we’d agree is the well written sample – the good code. So, what’s needed now is the badly written sample. The main criticism of defensive programming is that it can hide errors and that’s very true, if you write bad code.
public class BodyMassIndex {

    /**
     * Calculate the BMI using Weight(kg) / height(m)2
     *
     * @return Returns the BMI to four significant figures eg nn.nn
     */
    public Double calculate(Double weight, Double height) {

        Double result = null;
        if ((weight != null) && (height != null) && (weight > 0.0) && (height > 0.0)) {
            Double tmp = weight / (height * height);
            BigDecimal bd = new BigDecimal(tmp);
            MathContext mathContext = new MathContext(4);
            bd = bd.round(mathContext);
            result = bd.doubleValue();
        }
        return result;
    }
}
The code above also checks against both null and zero arguments, but it does so using the following if statement:
if ((weight != null) && (height != null) && (weight > 0.0) && (height > 0.0)) {
Looking on the bright side, the code won't crash if the inputs are incorrect, but it won't tell the caller what's gone wrong; it'll simply hide the error and return null. Although it hasn't crashed, you have to ask what the caller is going to do with a null return value. It'll either have to ignore the problem or process the error there and then using something like this:
@Test
public void test_zero_weight_input_forces_additional_checks() {

    Double result = instance.calculate(0.0, 1.8);
    if (result == null) {
        System.out.println("Incorrect input to BMI calculation");
        // process the error
    } else {
        System.out.println("Your BMI is: " + result.doubleValue());
    }
}
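One alternative to returning null (my sketch, not from the original post, and it requires Java 8) is to make the "no result" case explicit in the return type with java.util.Optional, so the caller can't forget to handle it:

```java
import java.util.Optional;

public class OptionalBmi {

    // Returns Optional.empty() instead of null for bad input, forcing
    // the caller to deal with the failure case explicitly.
    static Optional<Double> calculate(Double weight, Double height) {
        if (weight == null || height == null || weight <= 0.0 || height <= 0.0) {
            return Optional.empty();
        }
        // Simple two-decimal rounding for the sketch.
        double bmi = Math.round((weight / (height * height)) * 100.0) / 100.0;
        return Optional.of(bmi);
    }

    public static void main(String[] args) {
        System.out.println(calculate(85.0, 1.8).orElse(-1.0)); // 26.23
        System.out.println(calculate(0.0, 1.8).isPresent());   // false
    }
}
```

The failure is still "hidden" in the sense that no exception is thrown, but unlike a bare null the type signature forces every caller to decide what an empty result means.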
If this ‘bad’ coding technique is used throughout a code base, then there will be a large amount of extra code required to check each return value. It’s a good idea to NEVER return null values from a method. For more information take a look at this set of blogs. In conclusion, I really think that there’s no difference between defensive programming and fail-fast programming as they’re really the same thing. Isn’t there, as always, just good coding and bad coding? I’ll let you decide. This code sample is available on Github.
1There’s always a paradigm shift in thinking when learning a new language. There will be the point where the penny drops and you ‘get it’, whatever it is. | https://www.javacodegeeks.com/2013/04/does-defensive-programming-deserve-such-a-bad-name.html | CC-MAIN-2016-50 | refinedweb | 1,274 | 57.16 |
Red Hat Bugzilla – Bug 233488
attempt to use mouse with ncurses inside screen causes erratic behavior
Last modified: 2007-11-30 17:11:59 EST
Description of problem:
programs using the mouse click feature of ncurses act crazy when run from
within /usr/bin/screen. When mousemask() is used after initscr() in a
ncurses-based program, getch() will go crazy. See example code below.
Version-Release number of selected component (if applicable):
ncurses-5.5-24.20060715
ncurses-devel-5.5-24.20060715
screen-4.0.3-2.fc6
How reproducible:
by compiling and executing the following example code (cursebug.c):
#include <stdio.h>
#include <ncurses.h>
int main(int argc, char **argv) {
initscr();
mousemask(BUTTON1_PRESSED | BUTTON1_CLICKED, NULL);
halfdelay(10);
for (;;) {
int key = getch();
switch (key) {
case 'q':
case 'Q':
return 0;
case KEY_MOUSE:
fprintf(stderr, "Mouse Click.\n");
break;
default:
fprintf(stderr, "default: key = %d\n", key);
break;
}
}
endwin();
return 0;
}
Steps to Reproduce:
1. compile code: gcc -o cursebug cursebug.c -Wall -lncurses
2. in a window, start screen: /usr/bin/screen
3. then, start example from within screen: ./cursebug 2>cursebug.log
4. in a separate window, view log: tail -f cursebug.log
Actual results:
getch() returns -1 in a very tight loop
Expected results:
getch() should return -1 once every second (as per halfdelay(10)), and
actual key presses and mouse clicks when entered. This works fine when
example is run directly from console or an xterm, but not from within
screen.
Additional info:
Thanks for the report, looks like a gpm problem.
A gpm prob indeed. I broke it with the deadsocket patch. There was a better way
to do it and I hope I took the One this time. Should be fixed in the upcoming
update. You can try out the rawhide srpm in the meantime, it's equivalent.
gpm-1.20.1-81.fc6 has been pushed for fc6, which should resolve this issue. If these problems are still present in this version, then please make note of it in this bug report.
gpm-1.20.1-81.fc5 has been pushed for fc5, which should resolve this issue. If these problems are still present in this version, then please make note of it in this bug report.
sorry for taking this long to respond -- just for the record, gpm-1.20.1-81.fc6
does indeed fix the problem. Thanks ! | https://bugzilla.redhat.com/show_bug.cgi?id=233488 | CC-MAIN-2017-26 | refinedweb | 398 | 75.91 |
#include <CentroidDisulfidePotential.hh>
This class scores centroid disulfide bonds. It is intended to be a singleton with a single instance held by ScoringManager.
The energy functions are derived from those present in Rosetta++
Constructor
Destructor
Decide whether there is a disulfide bond between two residues.
Does not require that the residues be cysteines, so if this is important you should check for CYS first. (The relaxed requirements are useful for design.)
Referenced by protocols::protein_interface_design::movers::DisulfideMover::disulfide_list().
Calculates scoring terms for the disulfide bond specified.
Referenced by core::scoring::disulfides::CentroidDisulfideEnergy::residue_pair_energy().
Calculates scoring terms and geometry.
If a full atom pose is given, centroid_distance_score will be zero.
If one of the residues is glycine it will be replaced with alanine for the scores which require a CB atom. | https://www.rosettacommons.org/manuals/archive/rosetta3.4_user_guide/d1/da7/classcore_1_1scoring_1_1disulfides_1_1_centroid_disulfide_potential.html | CC-MAIN-2017-17 | refinedweb | 130 | 50.73 |
Summary: Mediawiki XML file version 0.4 does not validate against its own DTD file
Product: MediaWiki
Version: 1.16.0
Platform: All
OS/Version: All
Status: NEW
Severity: normal
Priority: Normal
Component: Export/Import
AssignedTo: wikibugs-l@lists.wikimedia.org
ReportedBy: rodrigospr...@gmail.com

Created attachment 7778 --> Patch to fix issues on Mediawiki DTD

Hi, I'm trying to validate a Mediawiki XML file against its DTD file but the validation is failing. I'm trying using PHP DOMDocument; I haven't tried the validation with other tools so I can't be sure if the problem is on PHP or Mediawiki, but I guess it is more likely to be on Mediawiki. I'm testing with the attached script (testMediawikiXml.php).

When I try to validate the XML from I get the following error:

Element '{}element': The attribute 'name' is required but missing

This error can be fixed by commenting line 119 of. The content of this line is:

<element minOccurs="0" maxOccurs="1" type="mw:DiscussionThreadingInfo" />

I guess the best solution is to add the "name" attribute but I haven't investigated, and I don't know much about DTD to know what should be the value of the "name" attribute.

If I try to run the script again another error occurs:

Element '{}namespace', attribute 'case': The attribute 'case' is not allowed.

To fix this one I have added the following line below line 92:

<attribute name="case" type="string" />

After those two changes to the DTD file I'm able to validate the XML file.

I'm attaching the script I'm using to test and a patch with the changes I made to the DTD file.

Thanks,
Rodrigo.

--
Configure bugmail:
-------
You are receiving this mail because:
-------
You are the assignee for the bug.
You are on the CC list for the bug.
_______________________________________________ Wikibugs-l mailing list Wikibugs-l@lists.wikimedia.org | https://www.mail-archive.com/wikibugs-l@lists.wikimedia.org/msg51448.html | CC-MAIN-2019-04 | refinedweb | 342 | 60.35 |
Icon API
API documentation for the React Icon component. Learn about the available props and the CSS API.
Import
You can learn about the difference by reading this guide on minimizing bundle size.
import Icon from '@mui/material/Icon'; // or import { Icon } from '@mui/material';
Component name

The name MuiIcon can be used when providing default props or style overrides in the theme.
Props
Props of the native component are also available.
The ref is forwarded to the root element.
CSS
You can override the style of the component using one of these customization options:
- With a global class name.
- With a rule name as part of the component's styleOverrides property in a custom theme.
I keep on getting an error whenever I try to assign character arrays to a postition in another array using a variable to say what position in the second array I want the first array to be stored in. It is hard to say what I mean in a simple way in English. Below is the code for my program. I only included it so you have some idea what's going on in it; you shouldn't need to understand how all of the embedded for loops work. I would GREATLY apprectiate your help.
Code:/*Program to take sentence user writes and divide into seperate words and store each word in an array Skipped commenting first section because it is pretty obvious what's going on*/ #include <iostream> using namespace std; int words(); char sentence[150]; int main() { cout << "Enter any words:" << endl; cin.getline(sentence, 150); cout << sentence; return 0; } int words() { int wordCount = 0; char wordGather = 15; char words[10]; int sentenceLength = strlen(sentence); for(int c = 0; c < sentenceLength; c++) //Start checking for spaces. { if(sentence[c] == ' ') //If a space occurs, { // for(int x = c; x <= 0; x--) //Count down from point in array { //sentence where a space occured. int y = 0; // y = y + 1; // if(sentence[x] == ' ') //If a space occurs, stop checking. { // break; // } // else //If no spaces occur, assign { //identified word's chars to array //'wordGather'. wordGather[y] = sentence[x]; char words[wordsNumber] = wordGather; //Assign word } //collected in 'wordGather' to //space <wordCount> in array //'words'. /* In the else statement closest to the end I get error 'illegal constant expression' dealing with the line 'char words[wordsNumber] = wordGather;' and the error 'pointer/array required' for the line 'wordGather[y] = sentence[x];' */ } wordCount = wordCount + 1; } } } | http://cboard.cprogramming.com/cplusplus-programming/45788-newb-question-why-cant-i-ask-position-array-using-variable.html | CC-MAIN-2016-30 | refinedweb | 289 | 57.4 |
table of contents
- buster 4.16-2
- buster-backports 5.04-1~bpo10+1
- testing 5.10-1
- unstable 5.10-1
NAME¶log1p, log1pf, log1pl - logarithm of 1 plus argument
SYNOPSIS¶
#include <math.h>
double log1p(double x); float log1pf(float x); long double log1pl(long double x);Link with -lm.
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
log1p():
log1pf(), log1pl():
DESCRIPTION¶These functions return a value equivalent to
log (1 + x)
The result is computed in a way that is accurate even if the value of x is near zero.
RETURN VALUE¶¶¶For an explanation of the terms used in this section, see attributes(7).
CONFORMING TO¶C99, POSIX.1-2001, POSIX.1-2008.
BUGS¶Before version 2.22, the glibc implementation did not set errno to EDOM when a domain error occurred.
Before version 2.22, the glibc implementation did not set errno to ERANGE when a range error occurred. | https://manpages.debian.org/buster-backports/manpages-dev/log1p.3.en.html | CC-MAIN-2021-04 | refinedweb | 155 | 61.63 |
I want to make this program that acts as a bank, how do I make sure the correct ID number must be entered with the correct pin and have it depending on the id you entered print hello then their name and prompt how much money they have in the bank.
attempts = 0
store_id = [1057, 2736, 4659, 5691, 1234, 4321]
store_name = ["Jeremy Clarkson", "Suzanne Perry", "Vicki Butler-Henderson", "Jason Plato"]
store_balance = [172.16, 15.62, 23.91, 62.17, 131.90, 231.58]
store_pin = [1057, 2736, 4659, 5691]
start = int(input("Are you a member of the Northern Frock Bank?\n1. Yes\n2. No\n"))
if start == 1:
idguess = ""
pinguess = ""
while (idguess not in store_id) or (pinguess not in store_pin):
idguess = int(input("ID Number: "))
pinguess = int(input("PIN Number: "))
if (idguess not in store_id) or (pinguess not in store_pin):
print("Invalid Login")
attempts = attempts + 1
if attempts == 3:
print("This ATM has been blocked for too many failed attempts.")
break
elif start == 2:
name = str(input("What is your full name?: "))
pin = str(input("Please choose a 4 digit pin number for your bank account: "))
digits = len(pin)
balance = 100
while digits != 4:
print("That Pin is Invalid")
pin = str(input("Please choose a 4 digit pin number for your bank account: "))
digits = len(pin)
store_name.append(name)
store_pin.append(pin)
I'm very impressed by how much you've elaborated on your program. Here's how I would view your solution.
So to create a login simulation, I would instead use a dictionary. That way you can assign an ID to a PIN. For example:
credentials = { "403703": "121", "3900": "333", "39022": "900" }
Where your ID is on the left side of the colon and the PIN is on the right. You would also have to assign the ID to a name that belongs to that ID using, you guessed it, a dictionary!
bankIDs = { "403703": "Anna", "3900": "Jacob", "39022": "Kendrick" }
Now that you've done that, you can create your virtual login system using if/else control flow. I've made my code like this:
attempts = 0 try: while attempts < 3: id_num = raw_input("Enter your ID: ") PIN = raw_input("Password: ") if (id_num in credentials) and (PIN == credentials[id_num]): print "login success." login(id_num) else: print "Login fail. try again." attempts += 1 if attempts == 3: print "You have reached the maximum amount of tries." except KeyboardInterrupt: print "Now closing. Goodbye!"
Note the try and except block is really optional. You could use the
break operator like you did in your code if you wanted to, instead. I just like to put a little customization in there (Remember to break out of your program is CTRL-C).
Finally, Python has a way of making life easier for people by using functions. Notice I used one where I put
login(id_num). Above this while loop you'll want to define your login so that you can display a greeting message for that particular person. Here's what I did:
def login(loginid): print "Hello, %s!" % bankIDs[loginid]
Simple use of string formatting. And there you have it. The same can be done with displaying that person's balance. Just make the dictionary for it, then print the code in your login definition. The rest of the code is good as it is. Just make sure you've indented properly your while-loop inside the elif on the bottom of your code, and your last 2 lines as well. Hope I helped. Cheers! | https://codedump.io/share/h6U8YTCkwjRU/1/bank-atm-program-login | CC-MAIN-2017-30 | refinedweb | 579 | 80.11 |
Quick start¶
The following steps will help you get started with HPX.
Installing HPX¶
The easiest way to install HPX on your system is by choosing one of the steps below:
vcpkg
You can download and install HPX using the vcpkg dependency manager:
$ vcpkg install hpx
Spack
Another way to install HPX is using Spack:
$ spack install hpx
Fedora
Installation can be done with Fedora as well:
$ dnf install hpx*
Arch Linux
HPX is available in the Arch User Repository (AUR) as
hpxtoo.
More information or alternatives regarding the installation can be found in the HPX build system, a detailed guide with thorough explanation of ways to build and use HPX.
Hello, World!¶
To get started with this minimal example you need to create a new project directory and a file
CMakeLists.txt with the contents below in order to build an executable using CMake and HPX:
cmake_minimum_required(VERSION 3.18) project(my_hpx_project CXX) find_package(HPX REQUIRED) add_executable(my_hpx_program main.cpp) target_link_libraries(my_hpx_program HPX::hpx HPX::wrap_main HPX::iostreams_component)
The next step is to create a
main.cpp with the contents below:
// Including 'hpx/hpx_main.hpp' instead of the usual 'hpx/hpx_init.hpp' enables // to use the plain C-main below as the direct main HPX entry point. #include <hpx/hpx_main.hpp> #include <hpx/iostream.hpp> int main() { // Say hello to the world! hpx::cout << "Hello World!\n" << hpx::flush; return 0; }
Then, in your project directory run the following:
$ mkdir build && cd build $ cmake -DCMAKE_PREFIX_PATH=/path/to/hpx/installation .. $ make all $ ./my_hpx_program
$ ./my_hpx_program Hello World!
The program looks almost like a regular C++ hello world with the exception of
the two includes and
hpx::cout.
When you include
hpx_main.hppHPX makes sure that
mainactually gets launched on the HPX runtime. So while it looks almost the same you can now use futures,
async, parallel algorithms and more which make use of the HPX runtime with lightweight threads.
hpx::coutis a replacement for
std::coutto make sure printing never blocks a lightweight thread. You can read more about
hpx::coutin The HPX I/O-streams component.
Note
You will most likely have more than one
main.cppfile in your project. See the section on Using HPX with CMake-based projects for more details on how to use
add_hpx_executable.
HPX::wrap_mainis required if you are implicitly using
main()as the runtime entry point. See Re-use the main() function as the main HPX entry point for more information.
HPX::iostreams_componentis optional for a minimal project but lets us use the HPX equivalent of
std::cout, i.e., the HPX The HPX I/O-streams component functionality in our application.
You do not have to let HPX take over your main function like in the example. See Starting the HPX runtime for more details on how to initialize and run the HPX runtime.
Caution
When including
hpx_main.hpp the user-defined
main gets renamed and
the real
main function is defined by HPX. This means that the
user-defined
main must include a return statement, unlike the real
main. If you do not include the return statement, you may end up with
confusing compile time errors mentioning
user_main or even runtime
errors.
Writing task-based applications¶
So far we haven’t done anything that can’t be done using the C++ standard library. In this section we will give a short overview of what you can do with HPX on a single node. The essence is to avoid global synchronization and break up your application into small, composable tasks whose dependencies control the flow of your application. Remember, however, that HPX allows you to write distributed applications similarly to how you would write applications for a single node (see Why HPX? and Writing distributed HPX applications).
If you are already familiar with
async and
futures from the C++ standard
library, the same functionality is available in HPX.
The following terminology is essential when talking about task-based C++ programs:
lightweight thread: Essential for good performance with task-based programs. Lightweight refers to smaller stacks and faster context switching compared to OS threads. Smaller overheads allow the program to be broken up into smaller tasks, which in turns helps the runtime fully utilize all processing units.
async: The most basic way of launching tasks asynchronously. Returns a
future<T>.
future<T>: Represents a value of type
Tthat will be ready in the future. The value can be retrieved with
get(blocking) and one can check if the value is ready with
is_ready(non-blocking).
shared_future<T>: Same as
future<T>but can be copied (similar to
std::unique_ptrvs
std::shared_ptr).
continuation: A function that is to be run after a previous task has run (represented by a future).
thenis a method of
future<T>that takes a function to run next. Used to build up dataflow DAGs (directed acyclic graphs).
shared_futures help you split up nodes in the DAG and functions like
when_allhelp you join nodes in the DAG.
The following example is a collection of the most commonly used functionality in HPX:
#include <hpx/local/algorithm.hpp> #include <hpx/local/future.hpp> #include <hpx/local/init.hpp> #include <iostream> #include <random> #include <vector> void final_task(hpx::future<hpx::tuple<hpx::future<double>, hpx::future<void>>>) { std::cout << "in final_task" << std::endl; } int hpx_main() { // A function can be launched asynchronously. The program will not block // here until the result is available. hpx::future<int> f = hpx::async([]() { return 42; }); std::cout << "Just launched a task!" << std::endl; // Use get to retrieve the value from the future. This will block this task // until the future is ready, but the HPX runtime will schedule other tasks // if there are tasks available. std::cout << "f contains " << f.get() << std::endl; // Let's launch another task. hpx::future<double> g = hpx::async([]() { return 3.14; }); // Tasks can be chained using the then method. The continuation takes the // future as an argument. hpx::future<double> result = g.then([](hpx::future<double>&& gg) { // This function will be called once g is ready. gg is g moved // into the continuation. return gg.get() * 42.0 * 42.0; }); // You can check if a future is ready with the is_ready method. std::cout << "Result is ready? " << result.is_ready() << std::endl; // You can launch other work in the meantime. Let's sort a vector. std::vector<int> v(1000000); // We fill the vector synchronously and sequentially. hpx::generate(hpx::execution::seq, std::begin(v), std::end(v), &std::rand); // We can launch the sort in parallel and asynchronously. hpx::future<void> done_sorting = hpx::sort(hpx::execution::par( // In parallel. hpx::execution::task), // Asynchronously. std::begin(v), std::end(v)); // We launch the final task when the vector has been sorted and result is // ready using when_all. auto all = hpx::when_all(result, done_sorting).then(&final_task); // We can wait for all to be ready. all.wait(); // all must be ready at this point because we waited for it to be ready. 
std::cout << (all.is_ready() ? "all is ready!" : "all is not ready...") << std::endl; return hpx::local::finalize(); } int main(int argc, char* argv[]) { return hpx::local::init(hpx_main, argc, argv); }
Try copying the contents to your
main.cpp file and look at the output. It can
be a good idea to go through the program step by step with a debugger. You can
also try changing the types or adding new arguments to functions to make sure
you can get the types to match. The type of the
then method can be especially
tricky to get right (the continuation needs to take the future as an argument).
Note
HPX programs accept command line arguments. The most important one is
--hpx:threads
=N to set the number of OS threads used by
HPX. HPX uses one thread per core by default. Play around with the
example above and see what difference the number of threads makes on the
sort function. See Launching and configuring HPX applications for more details on
how and what options you can pass to HPX.
Tip
The example above used the construction
hpx::when_all(...).then(...). For
convenience and performance it is a good idea to replace uses of
hpx::when_all(...).then(...) with
dataflow. See
Dataflow for more details on
dataflow.
Tip
If possible, try to use the provided parallel algorithms instead of writing your own implementation. This can save you time and the resulting program is often faster.
Next steps¶
If you haven’t done so already, reading the Terminology section will help you get familiar with the terms used in HPX.
The Examples section contains small, self-contained walkthroughs of example HPX programs. The Local to remote example is a thorough, realistic example starting from a single node implementation and going stepwise to a distributed implementation.
The Manual contains detailed information on writing, building and running HPX applications. | https://hpx-docs.stellar-group.org/branches/master/html/quickstart.html | CC-MAIN-2021-49 | refinedweb | 1,468 | 58.18 |
Using Google Cloud Vision With Expo and React Native
I’ve been wanting to play around with Google Cloud Vision for a while, and finally got a chance to try it in a react native app. I knew it was powerful, but it’s kinda like VR — you don’t really get it until you try it with your own hands. It took me a while to wade through all the configuration and setup, so here are my notes to hopefully help save others some time. You can also try out the cloud vision demo on expo or view the cloud vision react native example on github.
Here’s the simplest way I found to start playing with Google Cloud Vision inside a react native app:
Create a new React Native app using Expo:
If you’re not yet familiar with Expo, it’s an incredible set of tools that makes it easy to create and publish react native apps. Follow the excellent Get Started With Expo instructions and select the “tabs” template option to create your react native app.
Configuration/ Setup:
- Setup a new Firebase project. For simplicity I temporarily made my firebase data public, more background about firebase and expo here (no need to follow all those steps).
- Add the firebase sdk to your expo project:
npm install — save firebase
- Create a folder called
configat the root of your application and create a file called
environment.jsin it. We’ll store all our api keys and env variables in environment.js and add it to .gitignore so it’s not included in our repo. Examples are pasted at the bottom of this post and also in the github repo I created. Note: this is just personal preference, there are many other ways and this only keeps your secrets out of source control, not the app binary.
- Create a folder called
utilsand create a file in it called
firebase.js. Similar to above, example at bottom of post and this is just my personal preference for organizing the firebase stuff.
- Sign up for Google Cloud Platform, get your api key, and add it to your
environment.jsfile. This part actually took me a while, you need to add billing info and avoid getting confused by the huge amount of apis and authentication options they have. You should hopefully end up at a url similar to be able to click “Credentials” to get your api key:
Coding:
Now, we’re ready to actually code! The first step is getting our app to take a picture and upload it to our firebase account. I found this incredibly helpful expo firebase storage upload example and started by deleting everything in the
LinksScreen.js file and just pasting that bad boy in. Before going further you should make sure you understand what that code does — you’re basically using the built in expo tools to access the device camera, then uploading the image to your firebase project account.
Delete the
import * as firebase from ‘firebase’; on line 16 and instead import the two files we created:
import Environment from "../config/environment";
import firebase from "../utils/firebase";
Delete lines 20–31 (we initialized firebase already). At this point, you should be able to click the “Links” tab in your app, take a photo, and upload it to your firebase account. If not, double check your api keys are correct and you’re correctly importing them in.
Now the fun part: let’s post that image to the cloud vision api and get back some slightly creepy google magic. First add
googleResponse: null to your initial state (around line 25). This is where we’ll store the returned data from the api. Then add a button:
<Button
onPress={() => this.submitToGoogle()}
That posts our image to the cloud vision api:);
}
};
}
You should now be seeing all the returned data in your logs, and you can also display it on the screen:
{googleResponse && (
<Text
onPress={this._copyToClipboard}
onLongPress={this._share}
>
JSON.stringify(googleResponse.responses)}
</Text>
)}
While a huge blob of JSON is fun, if you want something more than a blank stare when showing it to a non programmer, you can display a list of the returned labels:
{this.state.googleResponse && (
<FlatList
data={this.state.googleResponse.responses[0].labelAnnotations}
extraData={this.state}
keyExtractor={this._keyExtractor}
renderItem={({ item }) => <Text>Item: {item.description}</Text>}
/>
)}
That’s it! There’s no styling and it ain’t pretty, but you should now be able to take a picture and get back an insane amount of info from google, such as the emotional state of your cat:
DISCLAIMER: this is only meant as a quick and dirty example, and is not secure nor meant for production. Keep your api keys secret and make sure to disable public access to firebase when you’re done, otherwise anyone can view and upload files to your account.
Credits/ Resources
I found the following posts really helpful as I pulled everything together:
This post from @iammosespaulr helped get me started with the big picture and posting to the cloud vision api:
Excellent Expo firebase uploading example:
This post from @wcandillon pointed me to that great example above:
This helps clarify a bit about storing env variables in expo apps:
Example files:
utils/firebase.js
import Environment from "../config/environment";;
config/environment.js
Note: this is my setup for easily adding a future production environment, it’s overkill for now but makes it easier for you to add additional expo release channels in the future.
var environments = {
staging: {
FIREBASE_API_KEY: "blabla",
FIREBASE_AUTH_DOMAIN: "blabla.firebaseapp.com",
FIREBASE_DATABASE_URL: "",
FIREBASE_PROJECT_ID: "blabla",
FIREBASE_STORAGE_BUCKET: "blabla.appspot.com",
FIREBASE_MESSAGING_SENDER_ID: "blabla",
GOOGLE_CLOUD_VISION_API_KEY: "blabla"
},; | https://medium.com/@mlapeter/using-google-cloud-vision-with-expo-and-react-native-7d18991da1dd | CC-MAIN-2020-34 | refinedweb | 938 | 60.04 |
0
Im having problem with this code I made for finding the average rain fall. When the program starts. it is suppose to go through each month and you are able to input the rainfall. When the months show up in the CMD it is just a bunch of numbers and letters. I believe the array is set up right. maybe im missing something.
# include <iostream> # include <cstring> using namespace std; int main () { double avgRain = 0; double rainSum = 0; int count = 0; int monthlyTotals[12]; string monthNames[] = {"January","Febuary","March","April","May","June","July","August","September","October","November","December"}; cout << "Please enter the amount of rainfall for each month, and press enter after each: " << endl; for (count = 0; count <= 12; count++) { cout << monthNames << " : "; cin >> monthlyTotals[count]; } for (count =0; count <=12; count++) rainSum = rainSum + monthlyTotals[count]; avgRain = rainSum / 12; for (count = 0; count <=12; count++) { cout << "Output : " << endl; cout << monthNames << "\t" << monthlyTotals[count] << endl; } cout << rainSum << "is the total rainfall for the year\n" << avgRain << " is the average for the year.\n"; return 0; }
if anything else is noticed to be wrong or could be done better. plz reply. | https://www.daniweb.com/programming/software-development/threads/398779/trying-to-find-average-rain-fall-months-not-showing | CC-MAIN-2017-09 | refinedweb | 189 | 66.57 |
A considerable number of Scala developers are attracted by the promises of type safety and functional programming in Scala, as can be seen by the adoption of libraries like Cats and Shapeless. When building an HTTP API, the choices for a pure functional programming approach are limited. Finch is a good candidate to fill that space and provide a full-stack FP experience.
My introduction to Finch happened when I was reviewing talks from the last Scala eXchange 2016 and I stumbled upon this good talk by Sofia Cole and Kingsley Davies.
They covered a lot of ground in just 45 minutes, but the thing that caught my attention was Kingsley was speaking about Finch. Probably I noticed it because I’ve used other frameworks before, either formally or in exploratory pet-projects, but I wasn’t aware of the existence of Finch itself.
So, as usual when facing a new technology, I tried to understand the so called ‘angle of Finch’. Why should I care about it, when there are plenty of solid alternatives? Let’s do a high-level overview of Finch to see if it is a library worth spending our time on.
Functional Programming in the Web
If we open Finch’s README file we see that Finch describes itself as:
Finch is a thin layer of purely functional basic blocks atop of Finagle for building composable HTTP APIs. Its mission is to provide the developers simple and robust HTTP primitives being as close as possible to the bare metal Finagle API.
It can’t be stated more clearly: Finch is about using functional programming to build HTTP APIs. If you are not interested in functional programming you should stop reading, as you are in the wrong blog.
Finch promotes a healthy separation between HTTP operations and the implementation of your services. Finch’s aim is to manage all the IO in HTTP via a very thin layer that you create, a set of request endpoints that will cover all the HTTP IO in your server. It being a simple and composable layer also means that it will be easy to modify as your API evolves or, if it comes to that, it will be easily replaced by another library or framework.
How does Finch work? Let’s see a very simple example:
// string below (lowercase) matches a String in the URL
val echoEndpoint: Endpoint[String] = get("echo" :: string) { (phrase: String) =>
  Ok(phrase)
}

val echoApi: Service[Request, Response] = echoEndpoint.toServiceAs[Text.Plain]
The example above defines an
echoEndpoint that returns whatever the user sends back to them, as a text response. The
echoEndpoint defines a single
get endpoint, as we can see from the implementation, and will be accessed via the path
/echo/<text>. We also define our api at
echoAPI as a set of endpoints, although in this particular case we have a single endpoint.
Even if this is a simplistic example you can see there is little overhead when defining endpoints. You can easily call one of your services within the endpoint, keeping both layers cleanly separated.
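To actually run this API we hand the service to a Finagle HTTP server. A minimal, self-contained sketch; the object name and port are arbitrary choices, not part of the original example:

```scala
import com.twitter.finagle.{Http, Service}
import com.twitter.finagle.http.{Request, Response}
import com.twitter.util.Await
import io.finch._

object EchoServer {
  // Same endpoint as above, repeated here so the snippet compiles on its own.
  val echoEndpoint: Endpoint[String] = get("echo" :: string) { (phrase: String) =>
    Ok(phrase)
  }

  val echoApi: Service[Request, Response] = echoEndpoint.toServiceAs[Text.Plain]

  def main(args: Array[String]): Unit =
    // Bind the service to port 8081 and block until the server shuts down.
    Await.ready(Http.server.serve(":8081", echoApi))
}
```

With the server running, `curl http://localhost:8081/echo/hello` should answer with `hello`.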
A World of Endpoints
Finch’s core structure is a set of endpoint you use to define your HTTP APIs, aiming to facilitate your development. How does it achieve that? If you think about all the web apps you have built, there are a few things that you commonly do again and again. Finch tries to alleviate these instances of repetition or boilerplate.
Composable Endpoints
Let’s start with the fact that Finch is a set composable endpoints. Let’s assume you are creating a standard Todo list application using a REST API. As you proceed with the implementation you may produce the following list of URI in it:
GET    /todo/<id>/task/<id>
PATCH  /todo/<id>/task/<id>
DELETE /todo/<id>/task/<id>
As you can see, we have a lot of repetition. If we decided to modify those endpoints in the future there’s a lot of room for manual error.
Finch solves that via the aforementioned composable endpoints. This means we can define a generic endpoint that matches the path we saw in the example above:
val taskEndpoint: Endpoint[Int :: Int :: HNil] =
  "todo" :: int :: "task" :: int
The endpoint taskEndpoint will match the pattern we saw defined previously and will extract both ids as integers. Now we can use it as a building block for other endpoints. See the next example:
final case class Task(id: Int, entries: List[Todo])
final case class Todo(id: Int, what: String)

val getTask: Endpoint[Task] = get(taskEndpoint) { (todoId: Int, taskId: Int) =>
  println(s"Got Task: $todoId/$taskId")
  ???
}

val deleteTask: Endpoint[Task] = delete(taskEndpoint) { (todoId: Int, taskId: Int) =>
  ???
}
We have defined both a
get and a
delete endpoint, both reusing the previously defined
taskEndpoint that matches our desired path. If down the road we need to alter our paths, we only have to change one entry in our codebase; the modification will propagate to all the relevant entry points. You can obviously do much more with endpoint composition, but this example gives you a glimpse of what you can achieve.
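Endpoints also compose at the service level: Finch's `:+:` combinator joins several endpoints into a single router. A sketch under stated assumptions — the stub bodies and response strings below are invented for illustration, standing in for real service calls:

```scala
import io.finch._

object TaskApi {
  // Shared path matcher: the lowercase int extracts an integer path segment.
  val taskEndpoint = "todo" :: int :: "task" :: int

  val getTask: Endpoint[String] = get(taskEndpoint) { (todoId: Int, taskId: Int) =>
    Ok(s"fetched task $taskId of todo $todoId")
  }

  val deleteTask: Endpoint[String] = delete(taskEndpoint) { (todoId: Int, taskId: Int) =>
    Ok(s"deleted task $taskId of todo $todoId")
  }

  // :+: folds both endpoints into one routing table; a request is served by
  // the first endpoint whose method and path match.
  val api = (getTask :+: deleteTask).toServiceAs[Text.Plain]
}
```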
Typesafe Endpoints
Reducing the amount of code to be modified is not the only advantage of composable endpoints. If you look again at the previously defined implementation:
val taskEndpoint: Endpoint[Int :: Int :: HNil] =
  "todo" :: int :: "task" :: int

val deleteTask: Endpoint[Task] = delete(taskEndpoint) { (todoId: Int, taskId: Int) =>
  ???
}
We see that the
deleteTask endpoint maps over two parameters,
todoId and
taskId, which are extracted from the definition of
taskEndpoint. Suppose we were to modify the endpoint to cover a new scenario, like adding API versioning to the path:
val taskEndpoint: Endpoint[Int :: Int :: Int :: HNil] =
  "v" :: int :: "todo" :: int :: "task" :: int
We can see that the type of the endpoint has changed from
Endpoint[Int :: Int :: HNil] to
Endpoint[Int :: Int :: Int :: HNil]: an additional
Int in the
HList. As a consequence, all the endpoints that compose over
taskEndpoint will now fail to compile, as they are currently not taking care of the new parameter. We will need to update them as required for the service to run.
This is a very small example, but we already see great benefits. Endpoints are strongly typed, and if you are reading this you probably understand the benefits of strong types and how many errors they prevent. In Finch this means that a change to an endpoint will be enforced by the compiler onto any composition that uses that endpoint, making any refactor safer and ensuring the coherence of the implementation.
Testable Endpoints
The previous section considered the type-safety of endpoints. Unfortunately this only covers the server side of our endpoints. We still need to make sure they are defined consistently with the expectations of clients.
Typical ways to ensure this include defining a set of calls your service must process correctly and running them as part of your CI/CD pipeline. But such tests can be both cumbersome to set up, due to the need to launch in-memory servers to execute the full service, and slow, because you may need to launch the full stack of your application.
Fortunately, Finch’s approach to endpoints provides the means to verify that your service follows the agreed protocol. Endpoints are functions that receive an HTTP request and return a response. As such, you can call an individual endpoint with a customised request and ensure it returns the expected result.
Let’s see an endpoint test taken from the documentation:
// int below (lowercase) matches an integer in the URL
val divOrFail: Endpoint[Int] = post(int :: int) { (a: Int, b: Int) =>
  if (b == 0) BadRequest(new Exception("div by 0"))
  else Ok(a / b)
}

divOrFail(Input.post("/20/10")).value == Some(2)
divOrFail(Input.get("/20/10")).value == None
divOrFail(Input.post("/20/0")).output.map(_.status) == Some(Status.BadRequest)
We test the
divOrFail endpoint by passing different
Input objects that simulate a request. We see how a
get request fails to match the endpoint and returns
None while both
post requests behave as expected.
Obviously more complex endpoints may require you to set up some stubs to simulate calls to services, but you can see how Finch provides an easy and fast way to ensure you don’t break an expected protocol when changing your endpoints.
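One lightweight way to handle such dependencies is to parameterise the endpoint on the function it calls, so a test can pass a hard-coded stub instead of the real service. A small sketch; the names getGreeting and lookup are invented for this example:

```scala
import io.finch._

object StubbedEndpoints {
  // The endpoint receives its collaborator as a plain function, so a test can
  // substitute a stub without any mocking framework.
  def getGreeting(lookup: Int => String): Endpoint[String] =
    get("greet" :: int) { (id: Int) => Ok(lookup(id)) }

  // In a test, a hard-coded function stands in for the real service:
  val stubbed: Endpoint[String] = getGreeting(id => s"user-$id")
}
```

Calling `stubbed(Input.get("/greet/7")).value` should then yield `Some("user-7")` without starting a server.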
JSON Endpoints
Nowadays, JSON is the lingua franca of REST endpoints. When processing a POST request one of the first tasks is to decode the body from JSON to a set of classes from your model. When sending the response, if you are sending back data, the last step is to encode that fragment of your model as a JSON object. JSON support is essential.
Finch excels in this department by providing support for multiple libraries like Jackson, Argonaut, or Circe. Their JSON documentation gives more details on what they support.
By using libraries like Circe you can delegate all the serialisation to be automatically managed by Finch, with no boilerplate required. For example, look at the following snippet taken from one of Finch examples:
import io.finch.circe.jacksonSerializer._
import io.circe.generic.auto._

case class Todo(id: UUID, title: String, completed: Boolean, order: Int)

def getTodos: Endpoint[List[Todo]] = get("todos") {
  val list = ... // get a list of Todo objects
  Ok(list)
}
If you look at the
getTodos endpoint, it states its return type is
List[Todo]. The body obtains such a list and returns it as the response. There is no code to convert that list to the corresponding JSON object that will be sent over the wire; all this is managed for you via the two imports defined at the top of the snippet. Circe automatically creates an encoder (and a decoder) for the Todo case class, and that is used by Finch to manage the serialisation.
Using Circe has an additional benefit for a common scenario. When you receive a POST request to create a new object, usually the data received doesn’t include the ID to assign to the object; you create this while saving the values. A standard pattern on these cases is to define your model with an optional
id field, like follows:
case class Todo(id: Option[UUID], title: String, completed: Boolean, order: Int)
With Finch and Circe you can standardise the treatment of these scenarios via partial JSON matches, which allow you to deserialise JSON objects with missing fields into a partial function that will return the object when executed. See the following snippet:
def postedTodo: Endpoint[Todo] = jsonBody[UUID => Todo].map(_(UUID.randomUUID())) def postTodo: Endpoint[Todo] = post("todos" :: postedTodo) { t: Todo => todos.incr() Todo.save(t) Created(t) }
In it the endpoint
postedTodo is matching the
jsonBody received as a function
UUID => Todo. This will match any JSON object that defines a
Todo object but is missing the
id. The endpoint itself maps over the result to call the function with a random UUID, effectively assigning a new
id to the object and returning a complete
Todo object to work with.
Although this looks like nothing more than convenient boilerplate, don’t dismiss the relevance of these partial deserialisers. The fact that your endpoint is giving you a full object, complete with a proper
id, removes a lot of scenarios where you would need to be aware of the possible lack of
id or use
copy calls to create new instances. You work with a full and valid model from the moment you process the data in the endpoint, and this reduces the possibility of errors.
Metrics, Metrics, Metrics
The current trends in software architecture towards microservices and canary releases mean knowing what is going on in your application matters more than ever. Logging, although still important, is no longer enough. Unfortunately many frameworks and libraries assume you will use a third party tool, like Kamon or New Relic, to manage your metrics. Which, in a context of microservices, can get expensive quite fast.
Although plain Finch doesn’t include any monitoring by itself, the best practices recommend using Twitter Server when creating a service with Finch. TwitterServer provides extras tooling, including a comprehensive set of metrics along a complete admin interface for your server.
Having a set of relevant metrics by default means you start your service using best practices, instead of trying to retrofit measurements once you realise they are needed. These metrics can also be retrieved via JSON endpoints, which allows you to integrate them with your standard monitoring tools for alerting.
Performance
Performance is always a tricky subject, as benchmarks can be misleading and, if we are honest, for most of the applications we implement the performance of our HTTP library is not the bottleneck.
That said, Finch is built on top of Finagle, a very performant RPC system built by Twitter. Finch developers claim that “Finch performs on 85% of Finagle’s throughput”. Their tests show that using Finch along Circe the server can manage 27,126 requests per second on their test hardware. More detailed benchmarks show that Finch is one of the fastest Scala libraries for HTTP.
So there you have it. Finch is not only easy to use, but it also provides more than decent performance, so you don’t have to sacrifice its ease of use even on your most demanding projects.
Good Documentation
You may be convinced at this point to use Finch, but with every new library your learn there come a crucial question: how well documented is it? It’s an unfortunate truth that open source projects often lack good documentation, a fact which increases the complexity of the learning curve.
Luckily for us Finch provides decent documentation for the users, including sections like best practices and a cookbook.
In fact, all of the examples in this post are taken from Finch’s documentation. I can say the documentation provided is enough to get you set up and running, and to start with your first services. For more advanced scenarios you may want to check the source code itself, which is well structured and legible.
Caveats
No tool is perfect, as the worn out “there is no silver bullet” adage reminds us. Finch, albeit quite impressive, has some caveats you need to be aware of before choosing it as your library.
The first and more important one is the lack of Websocket support. Although Finch has a SSE module, it lacks a full Websocket library. On many applications this is not an issue and you can work around it. But if you do need Websockets, you need to look elsewhere.
Related to the above limitation is the fact that Finch is still at version 0.11. Granted, nowadays software in pre-1.0 version can be (and is) stable and usable in production. And Finch is used in production successfully in many places, as stated by their README document. Finch is quite complete and covers the most common needs, but the library is growing and it may lack support for some things you may want. Like the aforementioned Websockets. Before choosing Finch make sure it provides everything you need.
The last caveat is the backbone of Finch, Finagle. Finagle has been developed by Twitter, and although stable and with a strong open source community, Twitter remains the main interested party using it.
In Conclusion
Finch is a good library for creating HTTP services, more so if you are keen on functional programming and interested on building pure services with best practices. It benefits from a simple but powerful abstraction (endpoints), removal of boilerplate by leveraging libraries like Circe, and great tooling (Twitter Server).
There are some caveats to be aware of, but we recommend you to build some small service with it. We are confident you will enjoy the experience. | https://underscore.io/blog/posts/2017/01/24/finch-functional-web-development.html | CC-MAIN-2021-49 | refinedweb | 2,655 | 60.85 |
"That's cool. But it would be even better if..."
By Geertjan on Apr 05, 2012
I recently talked to some NetBeans users who were interested in a demonstration of the features that will be part of NetBeans IDE 7.2. (See the 7.2 New and Noteworthy for the full list.)
One of the new features I demonstrated was this one. In an interface declaration, NetBeans IDE 7.2 will provide a hint, as can be seen in the sidebar below:
When the lightbulb is clicked, or Alt-Enter is pressed, this will be shown:
When the hint is invoked, the user will see this:
And then the user will be able to enter the name of a class, and the name of a package, and assuming the defaults above are taken, a class with this content will be generated:
package demo; public class WordProcessorImpl implements WordProcessor { @Override public String process(String word) { throw new UnsupportedOperationException("Not supported yet."); } }
When I demonstrated the above, the response from the audience was: "That's cool. But it would be even better if..."
- it was possible to implement an interface into an existing class.
- it was possible to select a class and specify the interfaces that it should implement.
- it was possible, in the context of a NetBeans Platform application, to specify the module where the class should be implemented.
So I created some issues:
- Implement an interface into an existing class
- Select class and specify interfaces to implement
- Allow user to select module for generating implementation | https://blogs.oracle.com/geertjan/entry/that_s_cool_but_it | CC-MAIN-2014-15 | refinedweb | 255 | 60.65 |
CodePlexProject Hosting for Open Source Software
First, I apologize - I'm very new to using Visual Studio in general. I have been writing Python scripts using Eclipse/pydev but I wish to move to Visual Studio. I have VS 2010 and have installed PyTools. It appears to be working now that
I got an interpreter working.
I am writing Python scripts for ArcGIS. What I need to be able to do is debug the Python scripts as they run within ArcCatalog or ArcMap.
I tried starting ArcCatalog and choosing Debug-> Attach, then choosing ArcCatalog. VS goes into debug mode at this point and I have breakpoints defined. When I run the script from ArcCatalog, despite the presence of breakpoints in VS, the
script executes to completion without ever hitting my breakpoints.
How do I configure VS and PyTools to allow debugging scripts like this?
Thanks for your help in advance!
Are you certain the script is running in that specific process? Could you do something like:
import os
print os.getpid()
To make sure the process is right?
The other possibility that springs to mind is if we're having trouble resolving the filename on disk to the code object loaded in the process. You might be able to do something like:
while True:
pass
And then hit the pause button in VS to break in. Because the process will be constantly executing Python code we should have no problems breaking in. Then in the call stack window you should see a stack trace. It'd be good to know whether
or not you can double click on the frames and have the files open. If they don't open for some reason then there's a mismatch that we can dig into more.
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | http://pytools.codeplex.com/discussions/350689 | CC-MAIN-2016-50 | refinedweb | 330 | 83.15 |
4 Ways to Traverse a Map
Usually I don't need to traverse a
java.util.Map, which is meant to be looked up, not iterated through. I traverse a
java.util.Map primarily for debugging purposem, to dump its content. In case you do need to, here are 4 ways I've tried, just as examples:
import static java.net.HttpURLConnection.*;You may have noticed I'm using primitive int numbers (200, 403, 404) as the map key. Thanks to auto-boxing in JDK 5, I can use primitive s and their wrapper types interchangeably. Another useful feature in JDK 5 used here is using static import to import all HTTP status constants from
import java.util.*;
public class MapTest {
public static void main(String... args) {
traverseMap();
}
private static void traverseMap() {
Map<Integer, String> data = new HashMap<Integer, String>();
data.put(HTTP_OK, "HTTP_OK");
data.put(HTTP_FORBIDDEN, "HTTP_FORBIDDEN");
data.put(HTTP_NOT_FOUND, "HTTP_NOT_FOUND");
System.out.printf("%nUsing JDK 5 foreach and entry set:%n");
Set<Map.Entry<Integer, String>> entries = data.entrySet();
for(Map.Entry<Integer, String> entry : entries) {
Integer key = entry.getKey();
String value = entry.getValue();
System.out.printf("%s = %s%n", key, value);
}
System.out.printf("%nUsing Iterator<Map.Entry> and entry set:%n");
for(Iterator<Map.Entry<Integer, String>> it = entries.iterator(); it.hasNext();) {
Map.Entry<Integer, String> entry = it.next();
Integer key = entry.getKey();
String value = entry.getValue();
System.out.printf("%s = %s%n", key, value);
}
System.out.printf("%nUsing JDK 5 foreach and key set:%n");
for(Integer key : data.keySet()) {
String value = data.get(key);
System.out.printf("%s = %s%n", key, value);
}
System.out.printf("%nUsing traditional Iterator and key set%n");
for(Iterator<Integer> it = data.keySet().iterator(); it.hasNext();) {
Integer key = it.next();
String value = data.get(key);
System.out.printf("%s = %s%n", key, value);
}
}
}
java.net.HttpURLConnection.
To compile and run the test:
There is some discussion on entrySet vs keySet in the comment section of a previous post: Unnecessary Cast in Map. Basically, using entry set is faster than using key set in traversing a Map. I also learned to use entrySet from there.There is some discussion on entrySet vs keySet in the comment section of a previous post: Unnecessary Cast in Map. Basically, using entry set is faster than using key set in traversing a Map. I also learned to use entrySet from there.
$ javac MapTest.java
$ java MapTest
Using JDK 5 foreach and entry set:
200 = HTTP_OK
403 = HTTP_FORBIDDEN
404 = HTTP_NOT_FOUND
Using Iterator<Map.Entry> and entry set:
200 = HTTP_OK
403 = HTTP_FORBIDDEN
404 = HTTP_NOT_FOUND
Using JDK 5 foreach and key set:
200 = HTTP_OK
403 = HTTP_FORBIDDEN
404 = HTTP_NOT_FOUND
Using traditional Iterator and key set
200 = HTTP_OK
403 = HTTP_FORBIDDEN
404 = HTTP_NOT_FOUND
14 comments:
nice tutorial .. helped me alot
this does not compile (in JDK 1.7)... a few errors. If you could look into it I'd love it.
Updated code sample to include the complete test class and output, to use generics collection, and to use printf instead of println for better formatting.
Sorry I pasted the wrote code fix... but looks like you fixed it anyway. I meant the map.entry needed to be capitalized into Map.Entry. You can remove that bad comment from me.
I'll sign this one this time with my real name!
Thanks for spotting this. I guess what happened was, I forgot to escape < and > around Map.Entry, and blogger treated them as an html element and made them all lowercase.
Awesome one ! blog. java training in chennai
Hey, would you mind if I share your blog with my twitter group? There’s a lot of folks that I think would enjoy your content. Please let me know. Thank you.
Java Training in Chennai | J2EE Training in Chennai | Advanced Java Training in Chennai | Core Java Training in Chennai | Java Training institute in Chennai
With exceedingly prepared experts, useful learning, and of course, our Python training, you will almost certainly break prospective employee meetings and achieve a great deal as a designer.
For More Info:- Python Training in Gurgaon
Here we will use new foreach loop introduced in JDK5 for iterating over any map in java and using KeySet of the map for getting keys. this will iterate through all values of Map and display key and value together. That’s all on multiple ways of looping Map in Java. We have seen exactly 4 examples to iterator on Java Map in a combination of KeySet and EntrySet by using for loop and Iterator. Let me know if you are familiar with any other ways of iterating and getting each key value from Map in Java..
Cool blog with interesting contents..Really nice..
angularjs Training in Chennai
angularjs Training in Chennai BITA Academy
node.js Training in Chennai BITA Academy
node.js Training in Chennai
Data Science Course Content
Data Science Course in chennai quora
Data Science Course fees in chennai
Blockchain Training in chennai
Blockchain Classroom Training in chennai
Blockchain Training Institute in chennai
Linux Training in chennai
TestComplete Training in chennai
Thanks for sharing such a helpful, and understandable blog. I really enjoyed reading it.
Robots for kids
Robotic Online Classes
Robotics School Projects
Programming Courses Malaysia
Coding courses
Coding Academy
coding robots for kids
Coding classes for kids
Coding For Kids | https://javahowto.blogspot.com/2006/06/4-ways-to-traverse-map.html?showComment=1329165591192 | CC-MAIN-2021-43 | refinedweb | 883 | 59.7 |
32. Variable arguments
<stdarg.h>
Synopsis
#include <stdarg.h> void va_copy(va_list dest, va_list src);
Description.
Returns
The
va_copy macro returns no value.
32.1.3. The
va_end macro
Synopsis
#include <stdarg.h> void va_end(va_list ap);
Description). If there is no corresponding invocation of the
va_start or
va_copy macro or if the
va_end macro is not invoked before the
return, the behavior is undefined.
Returns
The
va_end macro returns no value.
32.1.4. The
va_start macro The function
f3 is similar, but saves the status of the variable argument list after the
indicated number of arguments; after
f2 has been called once with the whole list, the trailing part of the list
is gathered again and passed to function
f4.
#include <stdarg.h> #define MAXARGS 31 void f3(int n_ptrs, int f4_after, ...) { va_list ap, ap_save; char *array[MAXARGS]; int ptr_no = 0; if (n_ptrs > MAXARGS) n_ptrs = MAXARGS; va_start(ap, f4_after); while (ptr_no < n_ptrs) { array[ptr_no++] = va_arg(ap, char *); if (ptr_no == f4_after) va_copy(ap_save, ap); } va_end(ap); f2(n_ptrs, array); // Now process the saved copy. n_ptrs -= f4_after; ptr_no = 0; while (ptr_no < n_ptrs) array[ptr_no++] = va_arg(ap_save, char *); va_end(ap_save); f4(n_ptrs, array); } | https://www.ashtavakra.org/c-programming/stdarg/ | CC-MAIN-2022-05 | refinedweb | 192 | 65.52 |
Opened 9 years ago
Closed 9 years ago
Last modified 8 years ago
#1068 closed defect (invalid)
Silly bug in ForiegnKey
Description
This works when it probably shouldn't
from django.models import places ... class MyModel(meta.Model): place = meta.ForeignKey(places.Place)
If someone accidentally uses the app module's Place class in a foreign key, everything seems to work except the place object will not get the get_mymodel_... methods. It should throw an error if that occurs otherwise it's not obvious that something is wrong.
Change History (2)
comment:1 Changed 9 years ago by adrian
comment:2 Changed 9 years ago by mir@…
- Resolution set to invalid
- Status changed from new to closed
This is obviously invalid and looks stone dead.
Note: See TracTickets for help on using tickets.
What you described is the standard way of designating ForeignKeys, so I'm not sure what the problem is...Could you explain things more in depth?
I suspect you're not getting the "get_mymodel" methods, because the places app isn't in your INSTALLED_APPS. | https://code.djangoproject.com/ticket/1068 | CC-MAIN-2015-11 | refinedweb | 177 | 63.49 |
Python 3.8 is the latest version of the popular language for everything from scripting and automation to machine learning and web development. Now available in an official beta release, Python 3.8 brings a number of slick syntax changes, memory sharing, more efficient serialization and deserialization, revamped dictionaries, and much more.
Naturally, Python 3.8 ushers in all manner of performance improvements as well. The overall result is a faster, more concise, more consistent, and more modern Python. Here’s what’s new and most significant in Python 3.8.
Assignment expressions
The single most visible change in Python 3.8 is assignment expressions, which use what is known as the walrus operator (
:= ). Assignment expressions allow a value to be assigned to a variable, even a variable that doesn’t exist yet, in the context of an expression rather than as a stand-alone statement.
while (line := file.readline()) != "end": print(chunk)
In this example the variable
line is created if it doesn’t exist, then assigned the value from
file.readline(). Then
line is checked to see if it equates to
"end". If not, the next line is read, stored in
line, tested, and so on.
Assignment expressions follow the tradition of comprehensible terseness in Python that includes list comprehensions. Here, the idea is to cut down on some of the tedious boilerplate that tends to appear in certain Python programming patterns. The above snippet, for instance, would normally take far more than two lines of code to express.
Positional-only parameters
A new syntax for function definitions, positional-only parameters, lets developers force certain arguments to be positional only. This removes any ambiguity about which arguments in a function definition are positional and which are keyword arguments.
Positional-only parameters make it possible to define scenarios where, for instance, a function accepts any keyword argument but can also accept one or more positionals. This is often the case with Python built-ins, so giving Python developers a way to do this themselves reinforces consistency in the language.
An example from Python’s documentation:
def pow(x, y, z=None, /): r = x**y if z is not None: r %= z return r
The
/ separates positional from keyword arguments; in this example, all of the arguments are positional. In previous versions of Python,
z would be considered a keyword argument. Given the above function definition,
pow(2, 10) and
pow(2, 10, 5) are valid calls, but
pow(2, 10, z=5) is not.
F-string debugging support
The f-string format provides a convenient (and more performant) way to print text and computed values or variables in the same expression:
x = 3
print(f'{x+1}')
This would yield
4.
Adding an
= to the end of an f-string expression prints the text of the f-string expression itself, followed by the value:
x = 3
print (f'{x+1=}')
This would yield
x+1=4.
Multiprocessing shared memory
With Python 3.8, the
multiprocessing module now offers a
SharedMemory class that allows regions of memory to be created and shared between different Python processes.
In previous versions of Python, data could be shared between processes only by writing it out to a file, sending it over a network socket, or serializing it using Python’s
pickle module. Shared memory provides a much faster path for passing data between processes, allowing Python to more efficiently use multiple processors and processor cores.
Shared memory segments can be allocated as raw regions of bytes, or they can use immutable list-like objects that store a small subset of Python objects—numeric types, strings, byte objects, and the
None object.
Typing module improvements
Python is dynamically typed, but supports the use of type hints via the
typing module to allow third-party tools to verify Python programs. Python 3.8 adds new elements to
typing to make more robust checks possible:
- The
finaldecorator and
Finaltype annotation indicate that the decorated/annotated objects should not be overridden, subclassed, or reassigned at any point.
- The
Literaltype restricts expressions to a specific value or list of values, not necessarily of the same type.
- The
TypedDicttype lets you create dictionaries where the values associated with certain keys are restricted to one or more specific types. Note that these restrictions are limited to what can be determined at compile time, not at run time.
New version of the
pickle protocol
Python’s
pickle module provides a way to serialize and deserialize Python data structures—for instance, to allow a dictionary to be saved as-is to a file and reloaded later. Different versions of Python support different levels of the
pickle protocol, with more recent versions supporting a broader range of capabilities and more efficient serialization.
Version 5 of
pickle, introduced with Python 3.8, provides a new way to pickle objects that implement Python’s buffer protocol, such as bytes, memoryviews, or NumPy arrays. The new
pickle cuts down on the number of memory copies that have to be made for such objects.
External libraries like NumPy and Apache Arrow support the new
pickle protocol in their Python bindings. The new
pickle is also available as an add-on for Python 3.6 and Python 3.7 from PyPI.
Reversible dictionaries
Dictionaries in Python were totally rewritten in Python 3.6, using a new implementation contributed by the PyPy project. In addition to being faster and more compact, dictionaries now have inherent ordering for their elements; they’re ordered as they are added, much as with lists. Python 3.8 allows
reversed() to be used on dictionaries.
Runtime audit hooks
Runtime audit hooks, as described in PEP 578, allows functions performed by the CPython runtime to be visible to Python code by way of instrumentation functions. This way, application performance monitoring tools can see far more of what's happening inside Python applications as they run, on the function-call level.
Initialization configuration
PEP 587 allows the CPython runtime fine-grained control over its startup options by way of C APIs. This way, Python's startup behavior can be customized far more precisely than before -- for instance, if you're using Python in an embedded way, or as a standalone runtime for an app distribution.
Performance improvements
- Many built-in methods and functions have been sped up by 20% to 50%, as many of them were unnecessarily converting arguments passed to them.
- A new opcode cache can speed up certain instructions in the interpreter. However, the only currently implemented speed-up is for the
LOAD_GLOBALopcode, now 40% faster. Similar optimizations are planned for later versions of Python.
- File copying operations, such as
shutil.copyfile()and
shutil.copytree(), now use platform-specific calls and other optimizations to speed up operations.
- Newly created lists are now, on average, 12% smaller than before, thanks to optimizations that make use of the length of the list constructor object if it is known beforehand.
- Writes to class variables on new-style classes (e.g.,
class A(object)) are much faster in Python 3.8.
operator.itemgetter()and
collections.namedtuple()also have new speed optimizations.
Python C API and CPython improvements
In recent versions of Python, major work has gone into refactoring the C API used in CPython, the reference implementation of Python written in C. So far that work has yielded only incremental changes, but they’re adding up:
- A new C API for Python Initialization Configuration allows tighter control and more detailed feedback for Python’s initialization routines. This makes it easier to embed a Python runtime into an application, and to pass startup arguments to Python programmatically. This new API is also meant to ensure that all of Python’s configuration controls have a single, consistent home, so that future changes (like Python’s new UTF-8 mode) are easier to slot in.
- Another new C API for CPython, the “vectorcall” calling protocol, allows for far faster calls to internal Python methods without the overhead of creating temporary objects to handle the call. The API is still unstable, but has been made provisionally available. The plan is to finalize it as of Python 3.9.
- Python runtime audit hooks provide two APIs in the Python runtime for hooking events and making them observable to outside tools like testing frameworks or logging and auditing systems.
Where to download Python 3.8
You can download the Python 3.8 beta from the Python Software Foundation. | https://www.infoworld.com/article/3400640/the-best-new-features-in-python-38.html?upd=1562075267640 | CC-MAIN-2020-29 | refinedweb | 1,404 | 53.31 |
Hey, guys! I need little help for finished my code. I have created the code for analize my text file. I have text file with 2 lines. The program counts the lines of my txt file, length of each line, upper and lower case letters. Code works perfect without errors.
The problem is I've confused creating the functions which count:
1. Spaces in my text file
2. Punctuations such as . , "
For spaces I put % symbol in my text file. I think it allows easier to find spaces and count them.
What is my idea? I think, I should find total length of text file or each line of text file( which I have it) and then create true/false function for example: if ( lenght != letters) then space counter increament. Or another way: if I have total length of text, call true/ false function that if lenght not included Upper and Lower letters and than counter increament.
I don't know, but I still playing with code without result. If you have any idea or can help me to find decision, I'll appreciate!
I put some questions in my code. Thank you!
// reading a text file #include <iostream> #include <fstream> #include <string> using namespace std; const char FileName[] = "c:\\myFile.txt"; int main (){ ifstream inMyStream (FileName); string lineBuffer; if (inMyStream.is_open()){ //create an array to hold the letter counts int upperCaseCount[26] = {0}; //counter of Up letters int lowerCaseCount[26] = {0}; //counter of Low letters int lineCount = 1; //counter of text lines in the file int wordCount = 1; //counter of spaces int length = 0; //counter of length int letters[128] = {0}; //counter of total letters in the file int punMarks = 1; //counter of punctuations(,.") while (!inMyStream.eof()){ //get a line of text getline (inMyStream, lineBuffer); int count = 1; count = lineCount; lineCount ++; cout << "\nText File line #: " << count << endl; length = lineBuffer.length(); cout << "Length of the line : " << length << endl; / lowerCaseCount[int(oneLetter)- 97]++; //make the index match the count array } }//end for //Question: I have created this function to count total characters in the buffer //I think, I can connect it to my idea for create for after while. char totLetters; for ( int i = 0; i < lineBuffer.length(); ++i ){ totLetters = char( lineBuffer[i] ); if ( totLetters == lineBuffer.length()){ letters[int(totLetters) - 128]++; } } }//end while inMyStream.close(); cout << "\nMy file containts the letters: " << endl; cout << "\nUpperCase | Quantity: " << endl; for (int i = 0; i < 26; i++ ) if( upperCaseCount[i] > 0) cout << " " << char(i + 65) << "\t | \t" << upperCaseCount[i] << endl; cout << "\nLowerCase | Quantity: " << endl; for (int i = 0; i < 26; i++ ) if(lowerCaseCount[i] > 0) cout << " " << char(i + 97) << "\t | \t" << lowerCaseCount[i] << endl; // I just leave this functions for review // This function should print spaces cout << "Text file has spaces: " << wordCount << endl; // Should print punctuations and use punMarks counter cout << "Text file has punctuations: " << "," << " and "<< "." 
<< endl; cout << endl; } else cout << "File Error: Open Failed"; return 0; } | https://www.daniweb.com/programming/software-development/threads/213423/help-wanted-in-c-code-analyzes-the-contents-of-a-text-file | CC-MAIN-2018-43 | refinedweb | 482 | 71.24 |
We have ~300 celeryd processes running under Ubuntu 10.4 64-bit , in idle every process takes ~19mb RES, ~174mb VIRT, thus - it's around 6GB of RAM in idle for all processes. In active state - process takes up to 100mb of RES and ~300mb VIRT
Every process uses minidom(xml files are < 500kb, simple structure) and urllib.
Quetions is - how can we decrease RAM consuption - at least for idle workers, probably some celery or python options may help? How to determine which part takes most of memory?
Take a look at subclassing the AutoScaler class and setting the min_concurrency variable in __init__. The default min_concurrency of 0 is preventing the default AutoScaler to scale down.
AutoScaler
min_concurrency
__init__
I haven't tested this class (my Celery test nodes are shut down) but something like the following should work:
from celery.worker.autoscale import Autoscaler
class MinIdleAutoscaler(Autoscaler):
def __init__(self,pool, max_concurrency, min_concurrency=10, keepalive=30, logger=None):
Autoscaler.__init__(self,pool,max_concurrency,min_concurrency,keepalive,logger)
You can then tell Celery to use this class by setting CELERYD_AUTOSCALER in your Celery config.
CELERYD_AUTOSCAL
11 months ago | http://serverfault.com/questions/208780/celery-minimize-memory-consuption | crawl-003 | refinedweb | 187 | 56.05 |
If you have a coding question, you can use stackoverflow with the tag 'errbot' and 'python'
Hi All
I am trying to schedule a cron job to post a message on Slack using my Errbot. Has anyone used a scheduler/cron job for the same?
```python
def activate(self):
    super().activate()
    self.start_poller(60, self.oncall(msg, args), times=1)
```
Above is the code snippet I am using to schedule my function oncall every 60 seconds.
```python
@botcmd
def oncall(self, msg, args):
```
Above is the function I am trying to call from my scheduler/cronjob
@achoudh5 yes I use the poller often. The mistake you've made is that you're calling the oncall method in the start_poller rather than just passing the method. For example it should be:
```python
def activate(self):
    super().activate()
    self.start_poller(60, self.oncall, 1, msg, args)

def oncall(self, msg, args):
    print("in on call")
```
I also noticed that your activate method doesn't contain the variables `msg` or `args`, so are you making an assumption that they are present? If they are not going to be present then your code should be:
```python
def activate(self):
    super().activate()
    self.start_poller(60, self.oncall, 1)

def oncall(self):
    print("in on call")
```
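For readers skimming this later: the fix is passing the method object itself, not the result of calling it. The contract is roughly "call this method every N seconds, at most `times` times", which can be sketched with the standard library alone (a toy model under my own naming, not errbot's actual implementation):

```python
import threading
import time

def start_poller(interval, method, times=None, *args):
    # Toy model: call method(*args) every `interval` seconds,
    # stopping after `times` calls (or never, if times is None).
    state = {"runs": 0}

    def tick():
        if times is not None and state["runs"] >= times:
            return
        state["runs"] += 1
        method(*args)
        timer = threading.Timer(interval, tick)
        timer.daemon = True
        timer.start()

    tick()

calls = []
start_poller(0.01, calls.append, 3, "on call")
time.sleep(0.2)
print(calls)  # ['on call', 'on call', 'on call']
```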
@nzlosh my config.py looks like this for ErrBot:-
import logging BACKEND = 'Slack' BOT_IDENTITY ={'token':'xoxb-<slack OAuth Bot token>'} BOT_DATA_DIR = r'<path>/ErrBot/errbot/data' BOT_EXTRA_PLUGIN_DIR = r'<path>/ErrBot/errbot/plugins' BOT_LOG_FILE = r'<path>/ErrBot/errbot/errbot.log' BOT_LOG_LEVEL = logging.DEBUG BOT_ADMINS = ("<slack_admin_name>")
Is there any more setting I need to do to make it work?
Slackbackend you must have a legacy token. To create one, follow these instructions.
pip install errbot[slack]which works, but when I start up I get log messages: ```
slack_backend
pip install errbot[slack-rtm]to install it.
To use the
Slackbackend you must have a legacy token. To create one, follow these instructions.
@nzlosh there seems to be upgradation, there is not bot scope I have seen and rest I followed similar process before as well :/
slack_rtmbackend but my understanding is you need to use slackclient v2 with it.
BACKEND = 'Slack'
I have a command with an argument. I'd like it to work with a default value if the argument is omitted.
@arg_botcmd("service_name", type=str, default="default_service", help="Service name") def xstatus(self, msg, service_name): ...
What's the right way to do it? I get an error when I try to run it w/o an argument:
User: !xstatus myservice Errbot: ok User: !xstatus Errbot: I couldn't parse the arguments; the following arguments are required: service_name
errbot --storage-get <plugin>gives me the right config dict back but it's not actually applied -- errbot complains that the plugin isn't configured when it boots. I don't think errbotio/errbot#910 should have been closed.
2020-12-04 20:12:40,809 DEBUG errbot.botplugin Previous timer found and removed
latesttag in docker hub:
docker pull errbotio/errbot:6.1.7
@re_botcmd(pattern=r"(?=.*re?)(?=.*cmd)(?=.*param=(?P<param>\w+))", flags=re.IGNORECASE) def my_re_cmd(self, msg, match): my_parm = match.groupdict().get("param", None) yield f"My re cmd with a param={my_param}" ... @cmdfilter def my_filter(self, msg, cmd, args, dry_run): if cmd == self.my_re_cmd.__name__: return msg, "my_func", None return None, None, None @re_botcmd(pattern=r"ah ah") def my_func(self, msg, match): print(match.groupdict()) # {'param' = 123} BUG??? yield "My func"
Syntax error in the given configurationwhen trying to configure my plugins through Slack. It seems to happen intermittently. Anyone know what might be the cause? I've documented a similar issue here before errbotio/errbot#1298 | https://gitter.im/errbotio/errbot | CC-MAIN-2021-04 | refinedweb | 605 | 58.08 |
#include <P_Feedback.h>
Inheritance diagram for Feedback:
Common notes for VMDTracker, Feedback, and Buttons classes, collectively referred to as devices in what follows.
Constructor: The constructor must not require any input arguments. The reason for this is that an instance of every class is created and held in an associative store so that it can be referenced by its device_name() string. This instantiation is done independently of any device configuration, such as what would be found in the .vmdsensors file. Constructors should thus do nothing but initialize member data to NULL or default values.
device_name(): This pure virtual function supplies a string by which the device can be accessed in an associative store (since classes aren't first-class objects in C++). The name must be unique to that class, among all devices of that type.
clone(): This should do nothing more that return an instance of the class.
do_start(const SensorConfig *): Here's where the action is: This method will be called from the base class start() method after general initialization is done. This method is where the subclass should, e.g., establish a connection to a remote device. If there is no class-specific initialization to do then the subclass need not override this method.
start() should be called only once in a device's life, when it is first added to a Tool.
Definition at line 65 of file P_Feedback.h. | http://www.ks.uiuc.edu/Research/vmd/doxygen/classFeedback.html | crawl-003 | refinedweb | 234 | 63.59 |
One of the great things about TensorFlow is its ability to handle multiple threads and therefore allow asynchronous operations. If we have large datasets this can significantly speed up the training process of our models. This functionality is especially handy when reading, pre-processing and extracting in mini-batches our training data. The secret to being able to do professional and high-performance training of our models is understanding TensorFlow queuing operations. The particular queuing operations/objects we will be looking at in this tutorial are FIFOQueue, RandomShuffleQueue, QueueRunner, Coordinator, string_input_producer and shuffle_batch, but the concepts that I will introduce are common to the multitude of queuing and threading operations available in TensorFlow.
If you’re a beginner to TensorFlow, I’d recommend first checking out some of my other TensorFlow tutorials Python TensorFlow Tutorial – Build a Neural Network and/or Convolutional Neural Networks Tutorial in TensorFlow. If you’re more of a video learning person, get up to speed with the online course below.
Recommended online course: If you want a video introduction to TensorFlow, I recommend the following inexpensive Udemy course: Data Science: Practical Deep Learning in Theano + TensorFlow
As usual, all the code for this post is on this site’s Github repository.
TensorFlow queuing and threads – introductory concepts
We know from our common day experience that certain tasks can be performed in parallel, and when we do such tasks in parallel we can get great reductions in the time it takes to complete complex tasks. The same is true in computing – often our CPU will get stuck waiting for the completion of a single task, such as waiting to read in data from a file or database, and it blocks any other tasks from occurring in the program. Needless to say, this impacts performance and doesn’t utilize our CPUs effectively.
These types of issues are tackled in computing by using threading. Threading involves multiple tasks running asynchronously – that is when one thread is blocked another thread gets to run. When we have multiple CPUs, we can also have multi-threading which allows different threads to run at the same time. Unfortunately, threading is notoriously difficult to manage, especially in Python. Thankfully, TensorFlow has come to the rescue and provided us means of including threading in our input data processing.
In fact, TensorFlow has released a performance guide which specifically recommends the use of threading when inputting data to our training processes. Their method of threading is called Queuing. Often when you read introductory tutorials on TensorFlow (mine included), you won’t hear about TensorFlow queuing. Instead, you’ll see the following feed_dict syntax as the method of feeding data into the training graph:
Here data is fed into the final training operation via the feed_dict argument. TensorFlow, in its performance guide, specifically discourages the use of the feed_dict method. It’s great for tutorials if you want to focus on core TensorFlow functionality, but not so good for overall performance. This tutorial will introduce you to the concept of TensorFlow queuing.
What are TensorFlow queues exactly? They are data storage objects which can be loaded and de-loaded with information asynchronously using threads. This allows us to stream data into our training algorithms more seamlessly, as loading and de-loading of data can be performed at the same time (or when one thread is blocking) – with our queue being “topped up” when required with new data to ensure a steady stream of data. This process will be shown more fully below, as I introduce different TensorFlow queuing concepts.
The first TensorFlow queue that I will introduce is the first-in, first-out queue called FIFOQueue.
The FIFOQueue – first in, first out
The illustration below, from the TensorFlow website, shows a FIFOQueue in action:
Here is what is happening in the gif above – first a FIFOQueue object is created with a capacity of 3 and a data type = “float”. An enqueue_many operation is then performed on the queue – this basically loads up the queue to capacity with the vector [0, 0, 0]. Next, the code creates a dequeue operation – where the first value to enter the queue is unloaded. The next operation simply adds 1 to the dequeued value. The last operation adds this incremented number back to the top of the FIFOQueue to “top it up” – making sure it doesn’t run out of values to dequeue. These operations are then run and you can see the result – a kind of slowly incrementing counter.
Let’s have another look at how this works by introducing some real TensorFlow code:
In this code example, I’ve created, I first create a random normal tensor, of size 3, and then I create a printing operation so we can see what values have been randomly selected. After that, I set up a FIFOQueue, with capacity = 3 as in the example above. I enqueue all three values of the random tensor in the enqueue_op. Then I immediately attempt to dequeue a value from q and assign it to data. Another print operation follows and then I create basically a fake graph, where I simply add 1 to the dequeued data variable. This step is required so TensorFlow knows that it needs to execute all the preceding operations which lead up to producing data. Next, we start up a session and run:
All that is performed in the code above is running the enqueue_many operation (enqueue_op) which loads up our queue to capacity, and then we run the fake graph operation, which involves emptying our queue of values, one at a time. After we’ve run this operation a few times the queue will be empty – if we try and run the operation again, the main thread of the program will hang or block – this is because it will be waiting for another operation to be run to put more values in the queue. As such, the final print statement is never run. The output looks like this:
Once the output gets to the point above you’ll actually have to terminate the program as it is blocked. Now, this isn’t very useful. What we really want to happen is for our little program to reload or enqueue more values whenever our queue is empty or is about to become empty. We could fix this by explicitly running our enqueue_op again in the code above to reload our queue with values. However, for large, more realistic programs, this will become unwieldy. Thankfully, TensorFlow has a solution.
QueueRunners and the Coordinator
The first object that TensorFlow has for us is the QueueRunner object. A QueueRunner will control the asynchronous execution of enqueue operations to ensure that our queues never run dry. Not only that, but it can create multiple threads of enqueue operations, all of which it will handle in an asynchronous fashion. This makes things easy for us. We have to add all our queue runners, after we’ve created them, to the GraphKeys collection called QUEUE_RUNNERS. This is a collection of all the queue runners, and adding our runners to this collection allows TensorFlow to include them when constructing its computational graph (for more information on computational graphs check out my TensorFlow tutorial). This is what the first half of our previous code example now looks like after incorporating these concepts:
The first change is to increase the size of dummy_input – more on this later. The most important change is the qr = tf.train.QueueRunner(q, [enqueue_op] * 1) operation. The first argument in this definition is the queue we want to run – in this case, it is the q assigned to the creation of our FIFOQueue object. The next argument is a list argument, and this specifies how many enqueue operation threads we want to create. In this case, my “* 1″ is not required, but it is meant to be illustrative to show that I am just creating a single enqueuing thread which will run asynchronously with the main thread of the program. If I wanted to create, say, 10 threads, this line would look like:
The next addition is the add_queue_runner operation which adds our queue runner (qr) to the QUEUE_RUNNERS collection.
At this point, you may think that we are all set – but not quite. Finally, we have to add a TensorFlow object called a Coordinator. A coordinator object helps to make sure that all the threads we create stop together – this is important at any point in our program where we want to bring all the multiple threads together and rejoin the main thread (usually at the end of the program). It is also important if an exception occurs on one of the threads – we want this exception broadcast to all of the threads so that they all stop. More on the Coordinator object can be found here – in our code, we will be implementing it rather naively. The session part of our example now looks like this:
The first two lines create a generic Coordinator object and the second starts our queue runners, specifying our coordinator object which will handle the stopping of the threads. We now can run sess.run(fg) as many times as we like, with the queue runners now ensuring that the FIFOQueue always has data in it when we need it – it will no longer hang or block. Finally, once we are done we ask the threads to stop operation (coord.request_stop()) and then we ask the coordinator to join the threads back into the main program thread (coord.join(threads)). The output looks like this:
The first thing to notice about the above is that the printing of outputs is all over the place i.e. not in a linear order. This is because of the asynchronous, nonlinear, running of the thread and enqueuing operations. The second thing to notice is that our dummy inputs are of size 5, while our queue only has a capacity of 3. In other words, when we run the enqueue_many operation we, in a sense, overflow the queue. You’d think that this would result in the overflowed values being discarded (or an exception being raised), but if you look at the flow of outputs carefully, you can see that these values are simply held in “stasis” until they have room to be loaded. This is a pretty robust way for TensorFlow to handle things.
Ok, so that’s a good introduction to the main concepts of queues and threading in TensorFlow. Now let’s look at using these objects in a more practical example.
A more practical example – reading the CIFAR-10 dataset
The CIFAR-10 dataset is a series of labeled images which contain objects such as cars, planes, cats, dogs etc. It is a frequently used benchmark for image classification tasks. It is a large dataset (166MB) and is a prime example of where a good data streaming queuing routine is needed for high performance. In the following example, I am going to show how to read in this data using a FIFOQueue and create data-batches using another queue object called a RandomShuffleQueue. To learn more about batching, have a look at my Stochastic Gradient Descent tutorial. Included in the code example is a number of steps required to process the images, but I am not going to concentrate on these steps in this tutorial – that’s fodder for a future post. Rather, I will focus on the queuing aspects. The code will include a number of steps:
- Create a list of filenames which hold the CIFAR-10 data
- Create a FIFOQueue to hold the randomly shuffled filenames, and associated enqueuing
- Dequeue files and extract image data
- Perform image processing
- Enqueue processed image data into a RandomShuffleQueue
- Dequeue data batches for classifier training (the classifier training won’t be covered in this tutorial – that’s for a future post)
This process will closely resemble the following gif, again from the TensorFlow site:
The main flow of the program looks like this:
I’ll go through each of the main queuing steps below.
The filename queue
First, after defining a few parameters, we create a filename list to pull in the 5 binary data files which comprise the CIFAR-10 data set. Then we run the cifar_filename_queue() function which I’ve created – it looks like this:
The first thing that is performed in the above function is to convert the filename_list to a tensor. Then we randomly shuffle the list and create a capacity = 10 FIFOQueue. We then enqueue fq with our tensor of randomly shuffled file names and add a queue runner. This is all pretty straightforward and produces a randomly shuffled queue of filenames to dequeue from. We only need one thread to perform this operation, as it is pretty simple. We return the filename queue, fq, from the function.
Next up in the main flow of our program is the read_data function.
The FixedLengthRecordReader
The read_data function takes the filename queue, dequeues file names and extracts the image and label data from the CIFAR-10 data set. Most of the function deals with preprocessing the image data, so we’ll skip over most of it (you can have a look at the code on Github if you like). However, there is a special TensorFlow object that we want to pay attention to:
The FixedLengthRecordReader is a TensorFlow reader which is especially useful for reading binary files, where each record or row is a fixed number of bytes. Previously in read_data the number of bytes per record or data file row is calculated and stored in record_bytes. Of particular note is that this reader also implicitly handles the dequeuing operation from file_q (our filename queue). So we don’t have to worry about explicitly dequeuing from our filename queue. The reader will also parse the files it dequeues and return the image data. The rest of the read_data function deals with shaping up the image and label data from the raw binary information. Note that the read_data function returns a single image and label record, of size (24, 24, 3) and (1), respectively. The image size, (24, 24, 3), represents a 24 x 24 pixel image, with an RGB depth of 3.
The next step in the main flow of the program is to setup the minimum number of examples in the upcoming RandomShuffleQueue.
The minimum number of examples in the RandomShuffleQueue
When we want to extract randomized batch data from a queue which is fed by a queue of filenames, we want to make sure that the data is truly randomized across the data set. To ensure this occurs, we want new data flowing into the randomized queue regularly. TensorFlow handles this by including an argument for the RandomShuffleQueue called min_after_dequeue. If, after a dequeuing operation, the number of examples or samples in the queue falls below this value it will block any further dequeuing until more samples are added to the queue. In other words, it will force an enqueuing operation. TensorFlow has some things to say about what our queue capacity and min_after_dequeue values should be to ensure good mixing when extracting random batch samples in their documentation. In our case, we will follow their recommendations:
The RandomShuffleQueue
We now want to setup our RandomShuffleQueue which enables us to extract randomized batch data which can then be fed into our convolutional neural network or some other training graph. The RandomShuffleQueue is similar to the FIFOQueue, in that it involves the same sort of enqueuing and dequeuing operations. The only real difference is that the RandomShuffleQueue dequeues elements in a random manner. This is obviously useful when we are training our neural networks using mini-batches. The implementation of this functionality is in my function cifar_shuffle_queue_batch, which I reproduce below:
We first create a variable called tensor_list which is simply a list of the image and label data – this will be the data which is enqueued to the RandomShuffleQueue. We then specify the data types and tensor sizes which match this data and is required as input to the RandomShuffleQueue definition. Because of the large volumes of data, we setup 16 threads for this queue. The enqueuing and adding to the QUEUE_RUNNERS collection operations are things we have seen before. In the final line of the function, we perform a dequeue_many operation and the number of examples we dequeue is equal to the batch size we desire for our training. Finally, the image batches and label batches are returned as a tuple.
All that is left now is to specify the session which runs our operations.
Running the operations
The final function I created in the main flow of the program is called cifar_run:
In this function, all I do is run the operations which were passed into this function – image and label. Remember that to execute these operations, the dequeue_many operation must be run for the RandomShuffleQueue along with all the preceding operations in the computational graph (i.e. pre-processing, file name queue etc.). Running these operations returns the actual batch data, and I then print the shape of these batches. I perform 5 batch extractions, but one could perform an indefinite number of these extractions, with the enqueuing and dequeuing all being taken care of via the queue runners. The output looks like this:
This output isn’t very interesting – but it shows you that the whole queuing process is working as it should – each time returning 128 examples (128 is our specified batch size) of image and label data. You can also look at each batch and find that the data is indeed randomized as we had hoped it would be. So there you have it, you now know how TensorFlow queuing and threads work.
In the above explanation, for illustrative purposes, I’ve actually shown you the long way of creating filename and random batch shuffle queues. TensorFlow has created a couple of helper functions which reduce the amount of code we need to implement these queues.
The string_input_producer and shuffle_batch
There are two queue helpers in TensorFlow which basically replicate the functionality of my custom functions which utilize FIFOQueue and RandomShuffleQueue. These functions are called string_input_producer which takes a list of filenames and creates a FIFOQueue with enqueuing implicitely provided, and shuffle_batch which creates a RandomShuffleQueue with enqueuing and batch-sized dequeuing already provided. In my main program (cifar_shuffle_batch) you can replace my cifar_filename_queue and cifar_shuffle_batch_queue functions with calls to string_input_producer and shuffle_batch respectively, like so:
and:
By running the script (at the Github repository here) with these replacements, you will get the same results as before.
We have now covered how TensorFlow queuing and threading works. I hope you now feel confident to implement these concepts in your TensorFlow programs, which will allow you to build high-performance TensorFlow training algorithms. As always, have fun.
Recommended online course: If you want learn more about TensorFlow and like video courses I recommend the following inexpensive Udemy course: Data Science: Practical Deep Learning in Theano + TensorFlow
Thanks for the tutorial. The API seems unnecessarily complex.
Basically, all we really need is an interface to get records by index.
public interface DataReader {
long getTotalItems();
T[] getItems(long start, int length);
}
The developer implements that to wrap the data they need to load.
Then it is loaded by a Utility class.
public class DataBatcher {
public DataBatcher(DataReader reader, int numThreads, int batchSize, boolean returnDataInRandomOrder) {}
public void start() {}
public void stop() {}
public T[] getBatch() {}
}
Hey,
Greap post!
FYI, I’ve followed one of the links you supplied:
and the first thing I saw was:
Note: In versions of TensorFlow before 1.2, we recommended using multi-threaded, queue-based input pipelines for performance. Beginning with TensorFlow 1.2, however, we recommend using the tf.contrib.data module instead. The tf.contrib.data module offers an easier-to-use interface for constructing efficient input pipelines. Furthermore, we’ve stopped developing the old multi-threaded, queue-based input pipelines. We’ve retained the documentation in this file to help developers who are still maintaining older code.
What’s your take on this?
Hi – I will be creating a new post to deal with the tf.contrib.data module sometime in the future. In the meantime, the threading and queues I have presented will give you good performance. Thanks for the comment | http://adventuresinmachinelearning.com/introduction-tensorflow-queuing/?utm_campaign=Revue%20newsletter&utm_medium=Newsletter&utm_source=DataScience%20Digest | CC-MAIN-2018-13 | refinedweb | 3,396 | 50.46 |
Insights from using Generics
Finally I’ve built code using Generics. Sure Generics has been around since .net 2.0. But my thing has been to code my commercial products. Peter’s Data Entry Suite had its origins in ASP.NET 1.0 and still can be compiled for ASP.NET 1.x customers. (There still are a few.) The new Versatile DataSources project allowed me to work with Generics and I had a blast. Yet I ran into some challenges that I’m sure everyone runs into. I figured I’d write about how I solved them.
Testing type compatibility (polymorphism)
The usual way to determine if an object is a specific type is through this syntax in C#:
if (objectinstance is type)
The “is” keyword does all of the work. Unfortunately this doesn’t work with generic types when you want to compare to the unbounded type. The unbounded type is the initial class declaration, like class MyClass<T>, where T has yet to be assigned. Its legal to test
if (myEntity is BaseEntity<Product>)
But if you want to test for BaseEntity<> without regard to Product, expect a compiler error:
if (myEntity is BaseEntity<>) // Illegal!
I found the solution here:. I used the answer posted on May 22.
It employs the Extension method concept, extending the System.Type class to offer HasGenericDefinition().
Use it like this:
Type objecttype = objectinstance.GetType();
if (objecttype.HasGenericDefinition(typeof(type<>)))
For example, to see if a MethodInfo object (describes a method using Reflection) has the return type of IEnumerable<>:
if (methodInfo.ReturnType.HasGenericDefinition(typeof(IEnumerable<>))
Also take from this example that the typeof() operator can be passed an unbounded type.
I would like to see something like this built into the .net framework and the C# language.
Typecasting to an unbounded type
Here’s some code that has a lot of problems.
public void TypeCasting(IEnumerable<> list) // illegal!
{
IEnumerable<> newlist = (IEnumerable<>) list.Clone();// illegal!
}
You cannot use unbounded types in all 3 locations shown above. If you want to share a type without regards to the bounded type, use a separate interface that does not use generics. The .net framework does this frequently: IEnumerable<> has IEnumerable; IQueryable<> has IQueryable.
Your own classes should define a non-generic interface too. For example:
public class MyGenericType<T> : IMyGenericType
{
}
The members of your generic type that need to be available to other consumers that are not interested in the bounded type should be declared in the interface.
For example:
public interface IMyGenericType
{
string Value { get; set; }
}
public class MyGenericType<T>:IMyGenericType
{
public string Value { get; set; }
}
Finally, the consumer method for MyGenericType would look like this:
public string UpdateText(IMyGenericType genType)
{
IMyGenericType newGenType = (IMyGenericType) genType.Clone(); // assumes Clone() is implemented
newGenType.Value = "Updated";
return newGenType.Value;
}
Instantiate objects through reflection
Let’s suppose you want to create an object of type List<T>, where T is determined at runtime. For example, the user passes in an object type to your function that creates the object.
public void CreateYourTypeAsList(Type myType)
{
}
It is illegal to pass a Type object to define the bounds on the generic type.
List<myType> list = new List<myType>(); // illegal!
You need to use Reflection to create your instance. Here’s the code:
public IList CreateYourTypeAsList(Type myType)
{
Type vGeneric = typeof(List<>);
Type vBounded = vGeneric.MakeGenericType(new Type[] { myType });
System.Reflection.ConstructorInfo vConstructor = vBounded.GetConstructor(new Type[] {});
return (IList) vConstructor.Invoke(new object[] { });
}
This is how it works.
- Use the MakeGenericType() method, on System.Type, to create a bounded type. The parameter list takes the elements that normally go in the <> symbol. For List<T>, there is one parameter. For Dictionary<Key,Value>, there are two parameters.
- Use Reflection to get the constructor you will use to create this object. The System.Type.GetConstructor() method takes a list of types that exactly match the types of the desired constructor. In the above example, there are no parameters, leaving the Type[] array empty.
- Invoke the constructor, passing in values for its parameters.
Personally, I find this approach excessive and hope the C# language team gives us a simpler approach. | http://weblogs.asp.net/peterblum/insights-from-using-generics | CC-MAIN-2014-35 | refinedweb | 686 | 51.14 |
How to create constants in ST2 (with SenchaArchitect)
How to create constants in ST2 (with SenchaArchitect)
I saw this post
which explains how to do it if you can put out your own javascript files in the directory specified, but I'm not sure how to do it in SA. I tried putting this in my app launch but of course it does not work because I've only defined my variable.
Code:
Ext.define('tweets.constants', { singleton : true, config : { baseUrl: '', }, constructor: function(config) { this.initConfig(config); this.callParent([config]); } });
Last edited by pkellner; 18 Jan 2013 at 6:43 PM. Reason: took out real urlPeter Kellner
ExtJS and Sencha Touch Training Videos
PeterKellner.Net
San Jose, CA
The first thing you have to do is to add a new path to the app loader config.
So, talking about best uses, I suggest you to put change the constants namespace to
Code:
tweets.util.contants
So, follow these steps:
Code:
- Select the "Application" node, search for the "loader" config and select it.
- Sencha Architect now have created a "loader" node for you, so select it, search for the path config and hit the edit button.
- Add the "twees.util" path (the path object should looks like follows)
{ "Ext":".", "tweets.util": "util" }
Sencha Inc
- Select the Application node and edit the "requires" config object adding your constants class (tweets.util.contastants).
- Create a folder named "util" under your project root directory and put the "constants.js" file inside it.
Andrea Cammarata, Solutions Engineer
Owner at SIMACS
@AndreaCammarata
github:
TUX components bundle for Sencha Touch 2.x.x | http://www.sencha.com/forum/showthread.php?254264 | CC-MAIN-2014-42 | refinedweb | 266 | 63.39 |
My fellow MVP Peter Nowak from Germany pointed me to this awesome kit on Amazon.de that has, among other things, a multi color LED. To control that LED from C# you will need access to the “softPwm” routines of wiringPi. Browsing the C sources I found the softPwm.c file with the routines I needed to access. So I created my own wrapper class:
using System.Runtime.InteropServices;

namespace WiringPi
{
    public class SoftPwm
    {
        [DllImport("libwiringPi.so", EntryPoint = "softPwmCreate")]
        public static extern void Create(int pin, int initialValue, int pwmRange);

        [DllImport("libwiringPi.so", EntryPoint = "softPwmWrite")]
        public static extern void Write(int pin, int value);

        [DllImport("libwiringPi.so", EntryPoint = "softPwmStop")]
        public static extern void Stop(int pin);
    }
}
As you see, this is all pretty ugly – everything is public static, it follows the layout of the original code very closely and in its current form it is totally not object oriented. But whatever, it’s still early days and it’s a temporary measure until Windows 10 pops up anyway. What this does is make a few C routines accessible to C# – for instance, the C routine “softPwmWrite”, which puts a value on a GPIO pin, becomes available as SoftPwm.Write.
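By the way, the DllImport trick is not specific to wiringPi – any C function with a simple signature can be exposed to C# this way. Here is a minimal, self-contained sketch (the class and method names are mine, and I’m assuming Mono on Linux, where “libc” resolves to the C library) that wraps a plain libc routine exactly the same way SoftPwm wraps libwiringPi.so:

using System;
using System.Runtime.InteropServices;

class PInvokeDemo
{
    // Map the C function "getpid" from libc onto a C# static method,
    // just like SoftPwm.Write maps "softPwmWrite" from libwiringPi.so.
    [DllImport("libc", EntryPoint = "getpid")]
    static extern int GetPid();

    static void Main()
    {
        Console.WriteLine("Running as process " + GetPid());
    }
}

The pattern is always the same: the EntryPoint names the C symbol, the managed signature describes its parameters, and the runtime does the marshaling for you.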
And then I can write the following simple program that lets the multi color LED that comes with the SunFounder kit show all kinds of colors:
using System; using System.Threading; using WiringPi; namespace SunfounderTest { class Program { const int LedPinRed = 0; const int LedPinGreen = 1; const int LedPinBlue = 2; static void Main(string[] args) { if (Init.WiringPiSetup() != -1) { SoftPwm.Create(LedPinRed, 0, 100); SoftPwm.Create(LedPinGreen, 0, 100); SoftPwm.Create(LedPinBlue, 0, 100); Console.WriteLine("Init succeeded"); for (var i = 1; i < 3; i++) { ShowColor(255, 0, 0, "Red"); ShowColor(0, 255, 0, "Green"); ShowColor(0, 0, 255, "Blue"); ShowColor(255, 255, 0, "Yellow"); ShowColor(255, 0, 255, "Pink"); ShowColor(0, 255, 255, "Cyan"); ShowColor(195, 0, 255, "Purple"); ShowColor(255, 255, 255, "White"); } SoftPwm.Stop(LedPinRed); SoftPwm.Stop(LedPinGreen); SoftPwm.Stop(LedPinBlue); } else { Console.WriteLine("Init failed"); } } private static void ShowColor(int r, int g, int b, string label) { SetLedColor(r, g, b); Console.WriteLine(label); Thread.Sleep(1000); } private static void SetLedColor(int r, int g, int b) { SoftPwm.Write(LedPinRed, r); SoftPwm.Write(LedPinGreen, g); SoftPwm.Write(LedPinBlue, b); } } }
I will not even begin with pretending that I actually fully understand what I am doing – and I even understand less of the why - but you first have to call the Init.WiringPiSetup method – that was already wrapped by Daniel – and thenyou need to call ‘Create’ on the three pins - only then you can actual set value to the pins using SoftPwm.Write. I wrapped the calls to that in SetLedColor that accepts red, green and blue values (apparenly 0-255), which is in turn wrapped in a routine that writes progress info and waits a little before progressing. And at the end, you will need to call Stop on all three pins to make the LED go off again, or else it will happily stay on.
Net effect: if you connect LED Pin R to GPIO17, Pin G to 18, Pin B to 27 and the ground pin to on of the GND, the led will flash trough a whole range of colors.
It will only work if you run the app using sudo, so
sudo mono SunfounderTest.exe
The annoying part is that in the samples provide with this kit these pins 17, 18 and 27 are referred to as 0, 1 and 2. This apparently has historical reasons. On this page at element14 I have found a pin schema with translation from one naming to the other. Notice Pin number 11 is called both GPIO17 as well as GPIO_GEN0. This correspond to Pin 1 in the code. I assume there is some logic somewhere, but I yet have to discover that :)
As is my custom, you can find a demo solution here. Mind you – not all of the code is mine, part of it is from Daniel Riches
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/controlling-led-raspberry-pi-2 | CC-MAIN-2017-04 | refinedweb | 670 | 62.48 |
[Ruby] The first year of new graduates considered the implementation method using TDD (ruby)
Why did the first year of new graduates pay attention to TDD?
Currently working as an engineer in the first year of new graduates. I had little engineering experience and was wondering if I could use my inexperience to beat my seniors.
Since I have little experience, I have no habit in the implementation procedure. I didn’t have any habits, so I thought I’d try to make good habits and try to implement TDD-based implementation (maybe there was a different way).
What I actually did
Try TDD development with Ruby on Rails Tutorial. sample_app
-
Test Driven Development. Clean code that works-How can we go there?-Takuto Wada | SeleniumConf Tokyo
Browse in-house TDD study session materials
What is #TDD Test-driven development. Before implementing it, write a test that only passes if the implementation is successful.
TDD image
Write test (before implementation) ↓ The test fails (because I haven’t implemented anything) ↓ Implement ↓ The test succeeds (since the implementation allows the tests I wrote before implementation to pass) ↓ Refactor. If the test was successful, we were able to fix the code without losing functionality. On the other hand, if the test fails due to refactoring, you have lost the functionality that you should have implemented, so debug it.
Merits and demerits of TDD
Advantages of TDD
- Safe refactoring
- Prevent bugs by adding features
Disadvantages of TDD
- Writing each time is a hassle
- I don’t know what test to write
TDD indicator
In order to suppress the disadvantages of TDD and make the most of them, it is necessary to have an index of when to use TDD.
- Simple test (eg whether request is successful) → TDD
- The stage when the operation and development contents are not completely determined → Development is first
- When you want to confirm development and validation that are important for security → TDD
- I want to refactor → Write a test first and check if the test passes even if refactored
In short, the basic TDD is good, but if you don’t know how to write a test, you should not do TDD.
Example of test when doing #TDD
Here is a specific example of a TDD test. This is an explanation using various test types (model, controller, integration, etc.), so if you do not know the test types, please see here. rails test directory structure and test process
Simple test
Whether it succeeds when requested or the existence of necessary elements is confirmed (simple controller test). It is good to write a simple confirmation test first
controller_test.rb
# Controller test test "should get home" do get root_path assert_response :success is # Is the request successful? assert_select "title", "HOME TITLE" # Is the title of the page HOME TITLE? end
Validation
First write a test that succeeds if validation is successful. (Example: integration test to check the result of model test, form submission)
- When checking validation for model class
user_test.rb
# User model testing # Write the following test before describing validation contents in user model def setup @user = User.new(name: "Example User", email: "[email protected]", password: "hogehoge", password_confirmation: "hogehoge") end test "should be valid" do assert @user.valid? end test "name should be present" do @user.name = "" assert_not @user.valid? end test "email should be present" do @user.email = "" assert_not @user.valid? end
- When confirming the validation for transmission using form (new registration, login, etc.)
users_signup_test.rb
# Test new registration (integration_test) # Test that wants to fail when trying to register with invalid parameters test "invalid signup information" do get signup_path assert_no_difference'User.count' do post users_path, params: {user: {name: "", email: "[email protected]", password: "foo", password_confirmation: "bar" }} end end
TDD for function modification (debugging)
There is no particular error statement, but I wrote a test first when it is not working as I expected. Write a test that only passes when it works the way you want it, and then modify the code to make it work the way you want.
The logout process will be described as an example. I referred to Ruby on Rails Tutorial 9.14 (Two inconspicuous bugs).
Login_controller.rb
class LoginController <ApplicationController . . def destroy # action to log out log_out redirect_to root_url end private # After this, the definition of the function used in the action # Destroy a persistent session def forget(user) user.forget # Empty user's login remember token stored in DB cookies.delete(:user_id) #empty the contents of cookies cookies.delete(:remember_token) end # Log out current user # Empty the memory token of current_user that indicates the current user, and empty the contents of session and variable current_user def log_out forget(current_user) session.delete(:user_id) current_user = nil end end
With this code state, Open two tabs in your browser and in each tab Suppose you are logged in as the same user.
First, log out in one tab. Then the value of current_user becomes nil. After that, when I try to log out on the other tab, the current_user is nil and it becomes nil.forget, which results in an error.
Such a situation can be said to be “when there is no particular error statement, but it is not working as expected,” so TDD is performed. I write a test to check if the operation is normal when I log out twice.
users_login_test.rb
def setup #login define user instance @user = User.new(name: "Example User", email: "[email protected]", password: "hogehoge", password_confirmation: "hogehoge") end test "two times logout after login" do log_in(@user) delete logout_path assert_not is_logged_in? assert_redirected_to root_url # Simulate that you logged out in another tab delete logout_path follow_redirect! #If redirected, check if the page is the one before login assert_select "a[href=?]", login_path #login Check if there is a path for the page assert_select "a[href=?]", logout_path, count: 0 assert_select "a[href=?]", user_path(@user), count: 0 end
Make changes to the functionality to pass this test. Finally adding the following code will pass the test.
Login_controller.rb
class LoginController <ApplicationController . . def destroy # action to log out log_out if logged_in? Use the logout function only in the #login stateredirect_to root_url end end
Tips/Others
In this book, it was written that at least test creation → provisional implementation → triangulation → refactoring should be done.
This time, the TDD for creating a function that returns the input argument (int) as a character string is taken as an example.
minimum test creation
Create a test for the function (not yet created). The test is red when created.
test "Pattern1 of test returnString function" do a = returnString(1) assert_equal a, "1" end
Temporary implementation
Define a function that will pass the test described above in any way. The function definition makes the test green.
def returnString(int) return "1" end
Triangulation
Create multiple tests of the same type (two this time), and rewrite the implementation to a general form so that the test of the Lera passes.
- First add another similar test
When I create this test it becomes red. Because the current implementation only returns “1”.
test "Pattern2 of test returnString function" do b = returnString(2) assert_equal b, "2" end
- Change the implementation to a general form
Change the function that returns a raw value of 1 to a function that returns the value received as an argument as a string. The test will be green.
def returnString(int) return "#{int}" end
Refactoring
Even if TDD is performed for the minimum implementation as described above, the code will become more complicated as the number of functions to be added increases. but it’s okay. Thanks to writing the tests, you can determine the exact functionality of the code. Therefore, you can refactor with confidence.
What I felt when I practiced #TDD
You can think of what you want to implement by doing TDD. I felt it made the implementation efficient. Also, I felt that it would be easier to maintain the maintainability of the code because the refactoring can be done with confidence by writing the test.
Not limited to TDD, when you write a test, you do not have to try to write multiple types of test contents at once. If you try to write at once and the content of the test itself is wrong, it will fall down
It is not a good idea to over-write the contents of the test, even if it covers the contents above. If you write too comprehensively (check thoroughly the existence of HTML elements, etc.), it will increase the effort to maintain and modify the test itself if there is a change in the function in the future.
Not limited to TDD, when writing a validation test, you must write both the test when you want it to work and the test when you do not want it to work.
users_signup_test.rb
# Test new registration # Test that wants to fail when trying to register with invalid parameters test "invalid signup information" do get signup_path assert_no_difference'User.count' do post users_path, params: {user: {name: "", email: "[email protected]", password: "foo", password_confirmation: "bar" }} end end # Test that you want to succeed by trying to register with valid parameters test "valid signup information" do get signup_path assert_difference'User.count', 1 do post users_path, params: {user: {name: "Example User", email: "[email protected]", password: "password", password_confirmation: "password" }} end end
By writing * raise, an error is intentionally generated in that situation. If you write raise but all the tests pass, it means that the part that describes raise has not been tested. Write raise in the conditional branch (if statement) and check whether the branch is tested. Write a raise on the suspicious part to see if it is being tested.
controller
def create if (user_id = session[:user_id]) . . else If the raise # test passes, you know that this part has not been tested → write a test so that this part is tested . . end end
References
Ruby on Rails Tutorial [Test Driven Development]( Clean code that works-How can we go there?-Takuto Wada | SeleniumConf Tokyo | https://linuxtut.com/the-first-year-of-new-graduates-considered-the-implementation-method-using-tdd-(ruby)-ab1d5/ | CC-MAIN-2020-45 | refinedweb | 1,658 | 53.81 |
Python package collecting commonly used snippets for the Keras library.
Project description
Python package collecting commonly used snippets for the Keras library.
How do I install this package?
As usual, just download it using pip:
pip install extra_keras_utils
Tests Coverage
Since some software handling coverages sometime get slightly different results, here’s three of them:
is_gpu_available
Method that returns a boolean if a GPU is detected or not.
from extra_keras_utils import is_gpu_available if is_gpu_available(): print("Using gpu!")
is_multi_gpu
Method that returns a boolean if more than one GPU is detected.
from extra_keras_utils import is_multi_gpu if is_multi_gpu(): print("More than one GPU is available!")
set_seed
Method to get reproducible results.
from extra_keras_utils import set_seed set_seed(42) # set as seed 42, the results are nearly reproducible. set_seed(42, kill_parallelism=true) # set as seed 42, the results are fully reproducible.
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/extra-keras-utils/ | CC-MAIN-2019-47 | refinedweb | 160 | 58.48 |
My 5 Favorite IDEA Shortcut Key Combinations
- Control-n: find class
- Control-q: quick Javadoc
- Control-Alt-v: introduce variable
- Shift-F6: rename
- Control-Shift-Space: smart completion
What are your favorite IDEA shortcuts?
Upgraded To Pebble 2.0.1
For those of you reading this from an aggregator, take a look at the new look here.
Please let me know if you encounter any difficulties.
XQuery, XSLT 2 and XPath 2 Are W3C Recommendations
The World Wide Web Consortium:.
When I wrote my JNB article on XQuery 1118 days ago, the specs were in final call status. I didn't realize then that it would take another three years for them to become W3C Recommendations.
(: Well, better late than never. :)
Kawa 1.9.0 Released
Per Bothner on the Kawa mailing list: I've released Kawa 1.9.0, and updated the web pages. Please try the .tar.gz and/or .jar and let me know of any problems. Here is the list of recent (and not-so-recent) changes.
For people who are not familiar with it, Kawa is language development framework for the JVM together with an implementation of the Scheme programming language and an implementation of the XQuery language. It compiles these languages into Java bytecodes.
It is in Kawa that I first saw the technique of compiling scripts from the interactive shell into Java bytecode and then load and execute them.
Java
Thursday Java Generics Quiz
I'm going through O'Reilly's Java Generics and Collections by Maurice Naftalin and Philip Wadler to gain the necessary understanding of Java generics to effectively use it. I mentioned this book 53 days ago.
Here's something that surprised me. Can you guess if the following two classes will compile? And why?
import java.util.*; class Foo { public
Integerfoo( Listlist) { return null; } public Stringfoo( Listlist) { return null; } }
import java.util.*; class Bar { public
Integerbar( List<Integer>integers) { return null; } public Stringbar( List<String>strings) { return null; } }
Innovation.
Tomcat 5 on Fedora Core 6: In Five Easy Steps
Have you noticed anything different about this blog in the past 24 hours?
I moved it to my newly upgraded Fedora Core 6 workstation yesterday. And I'm happy to report that setting Tomcat up on a Fedora Core 6 is extremely easy. I'll outline the steps I took to set up a functional Tomcat 5 server:
Install.
Install Tomcat
Tomcat 5 is included in Fedora Core 6, but not installed by default. So I have to bring it in from the repository:
[root@gao]# yum install tomcat5 tomcat5-webapps tomcat5-admin-webapps
This installs Tomcat 5.5.17 and a lot of their dependencies onto the system.
Hook up Tomcat 5 with Apache Web Server
Since I don't want my users to have to type ":8080" all the time, I went ahead and hooked up Tomcat 5 to httpd. In the past, this step had been the most confusing. I remember spending days browsing through Tomcat's website trying to figure out which one of the three alternatives offered there is the one that worked. I still have the mod_jk.so that I compiled from CVS source somewhere on by backup CD-ROMS.
In Fedora Core 6, things are quite straightforward because the version of httpd included contains mod_proxy_ajp which allows httpd to talk to Tomcat 5 through the AJP protocol, which Tomcat 5 listens to on port 8009. To make the connection, I edited /etc/httpd/conf.d/proxy_ajp.conf so that it reads (excluding comments):
[root@gao]# cat /etc/httpd/conf.d/proxy_ajp.conf LoadModule proxy_ajp_module modules/mod_proxy_ajp.so ProxyPass /blog ajp://localhost:8009/blog
Turn on the services
To have httpd and Tomcat 5 start automatically upon reboot, I went to the System->Administration->Services menuitem and enabled both the httpd and tomcat5 services.
Now I have a working Tomcat 5 server. It's webapps directory is at /usr/share/tomcat5/webapps.
Install Sun JDK
So far we have been using the default Java package that came with Fedora Core 6, which is a Free Software implementation of Java based on GNU Gcj, GNU Classpath, and the Eclipse compiler. To run the widest set of Java applications and servlets, I need to install the Sun JDK.
The best way of installing Sun JDK on Fedora Core 6 is to follow the JPackage.org method. I wrote an Introduction to JPackage.org 782 days ago. Things haven't changed much since then (except that now Fedora Core includes a bunch of Java packages, which makes my life easier).
To follow the JPackage.org method, I need to make sure that the following packages are installed:
[root@gao]# yum install jpackage-utils rpm-build libXp unixODBC unixODBC-devel
The first two were already installed when I checked. The others are needed by the JDK.
Then I need to download the JDK (Linux self-extracting file) from Sun's website.
Now we are five command lines away from success:
[root@gao]# rpm -ihv\ non-free/SRPMS/java-1.5.0-sun-1.5.0.10-2jpp.nosrc.rpm [root@gao]# cp jdk-1_5_0_10-linux-i586.bin /usr/src/redhat/SOURCES [root@gao]# rpmbuild -ba /usr/src/redhat/SPECS/java-1.5.0-sun.spec [root@gao]# cd /usr/src/redhat/RPMS/i586 [root@gao]# for x in java-1.5.0-sun-*.rpm; do rpm -ihv $x; done
[Update (Thu Apr 26 06:10:52 CDT 2007)] As Sun makes update releases of the JDK, JPackage.org updates their nosrc.rpm version number that is available from their archive. Since Sun does not cordinate its JDK release with JPackage.org, their will be some lag time before a new nosrc.rpm appears in the JPackage.org archive after each Sun release. As of yesterday, the first two of the above five commands should be updated to
[root@gao]# rpm -ihv\ non-free/SRPMS/java-1.5.0-sun-1.5.0.11-1jpp.nosrc.rpm [root@gao]# cp jdk-1_5_0_11-linux-i586.bin /usr/src/redhat/SOURCES
This One Is For Brian, Brian, Brad and David And Anyone Who Bought AppleTV Last Week
Jim Allchin (Co-President, Platforms & Services Division, Microsoft): I would buy a Mac today if I was not working at Microsoft.
Sc.
GMail Fun
Sun Java Designers Distancing From Property Syntax
It took 34 days since the initial posting (and my reporting), but it seems that some of the Java language designers and other Sun folks (Ahé, Buckley, Muller) are moving away from the syntax for properties.
I believe Hans Muller's post asked the most fundamental question: What are properties for?
And if I remember correctly, the properties concept came into Java through the JavaBeans specification in JDK 1.1. It was supposed to be an abstraction layer of the underlying mechanisms for GUI builders, like Microsoft's COM. Properties, together with the concept of methods and events form the foundation of a GUI component system. And of particular utility are a couple of combinations of properties and events: the bound properties and constrained properties.
So properties, in that context, does something useful. For example, when you set the text property on a Label, it will have go find the appropriate glyphs from its font and erase the old text and paint the new text.
Somewhere along the line, the concept of properties were subverted. It became a mindless chant:
// Exhibit A public int foo;is not OO! Change it to
// Exhibit B private int foo; public void setFoo(int value) { this.foo = foo; } public int getFoo() { return foo; }to make it OO. And along the way, we, with our favorite IDEs as accomplices, managed to bloat the world total Java code base by about 10% (maybe more).
It is these IDE-generated, boilerplate, pointless bloat that most people hate. And we wish we can do
a.foo = b.foo;instead of
a.setFoo(b.getFoo());
Well, there is an obvious solution to this (minor) gripe. Can you guess what it is? You're right. You can accomplish this goal without addition to the Java programming language. You can simply turn all instances of Exhibit B into Exhibit A. There should be a refactoring called "Remove frivolous getters and setters" that does it automatically.
Go ahead and do it. Should take about ten minutes.
You done? Did it break your app? Probably not.
Do you have any getters and setters left? If not, you're done. And you can go home now.
The remaining getters and setters are the true properties. Now stare at them for a minute. (Here's an example from the JDK:
public void setNextFocusableComponent(Component aComponent) { boolean displayable = isDisplayable(); if (displayable) { deregisterNextFocusableComponent(); } putClientProperty(NEXT_FOCUS, aComponent); if (displayable) { registerNextFocusableComponent(aComponent); } }from javax.swing.JComponent.)
Still feel like some syntactic sugar for properties? Go ahead and add some. (Here's a tip: the Microsoft folks have property support in C# for six years. I wouldn't mind that syntax!)
Finally Upgraded To Fedora Core 6
It has been 79 days since Fedora Core 6 was released. I finally got a chance to upgrade my work station from FC5 to FC6.
As in the past, I downloaded the FC6 x86_64 DVD using Bittorrent, and started my "try upgrade first, if that doesn't work, do a fresh install."
The upgrade worked. It took about 2 hours for all the new packages to be installed from the DVD to the hard drive. And then it took a full day to pull down all the accumulated updates using "yum update" from the Fedora mirrors. As in the past, I relied upon the Fedora Core 6 Tips & Tricks website to get the multimedia stuff installed.
Improvements that I've noticed include:
- The Istanbul Desktop Session Recorder is improved tremendously. Last time I tried it, it was very primitie. The version in FC6 feels a lot smoother, and did not use a huge amount of memory or CPU as the FC5 version did.
- I just noticed the Cups-PDF printer driver that allows one to print anything to a PDF file. (This might be already present in FC5.)
- The GL-accelerated desktop effects are cool. My windows wobble when moved, and my workspaces are on a cube (that rotate in 3D) now. It also has something similar to the Mac OS X Expose.
- The Gobby collaboration editor looks like it should be useful. It supports distributed file editing with chat.
- I switched to 32-bit Firefox just to get Flash working. I can watch videos on YouTube now.
- And as you can see in the screenshot below, Qemu still works.
Beatles Songs Come To iTunes Music Store
Eliot Van Buskirk and Sean Michaels: [...] Almost lost in the shuffle, though, is another piece of huge news—the fact that the Beatles catalog is now (or soon will be) available the iTunes store. Congratulations, Steve, I know you've wanted this for a long, long... long time.
AJaX Ruined My User Experience...
...on Bloglines.
Something bothersome has been happening to Bloglines—the on-line Blog aggregator I have been using for more than three years.
It has to do with the left-nav pane, where my subscriptions are listed and ones with new posts are shown in boldface with the number of new posts displayed in parentheses, much like in any standard email client. As I work through the blogs by clicking on each subscription, the posts would show up on the right pane, and the boldness would go away. Again, like in typical email clients.
In the old days. The left-nav pane would refresh itself every so often, with updated information reflecting any new posts made since the last refresh or my last interaction, whichever is more recent. During the, maybe, 200 microseconds of the refreshing, I simply can't click on the left-nav pane.
Some time during 2006, Blogline introduced AJaX to the left-nav pane. Instead of refreshing itself, the left-nav pane automatically updates itself with information that it apparently has gotten from the Blogline home asynchronously.
This, by itself, is a good thing. It reduces the server load on Bloglines. And it also reduces the overall internet bandwidth usage.
Except when it clashes with me, the user, clicking on a subscription, which is a daily occurrence. And it goes like this:
I saw a subscription with new posts:
Weiqi Gao's Observations (3)
I click on it. The new posts show up on the right pane. But, instead of seeing the subscription turn to:
Weiqi Gao's Observations
The line flickers a bit and returns to the pre-click state of:
Weiqi Gao's Observations (3)
Apparently an AJaX packet arrived right after I clicked on the subscription. And it tells the left-nav pane that there are 3 new posts in the subscription.
Now the left-nav pane, contains wrong information. If I was in one of my rapid scan mode, the wrong false positive new post information could occur for a dozen subscriptions.
I'd rather not have wrong information displayed.
This is not something that would drive me to another service provider. And I know it is not good form to gripe about a fantastic free service. But I simply cannot hold it in me any longer.
The problem should not be hard to fix: simply discard any asynchronous updates if I have interacted with the left-nav pane within the last two (or one or three) seconds. I hope someone at Bloglines is listening.
I Bet You Haven't Seen Java.com For A While, ...
Thursday Java 6 Quiz
Fill in the blanks:
As of Java SE 6.0, the deprecated ____________________ have been removed, instead ____________________ are to be used.
Happy New Year
Happy New Year 2007! | http://www.weiqigao.com/blog/2007/01.html | crawl-001 | refinedweb | 2,301 | 65.93 |
This is part of a tutorial series on creating extension components for Design Studio.
Update! Parts 1.9, 1.10c and 1.10d are obsolete as of Design Studio 1.6. They will still work and we recommend that you go through the tutorial exactly as it is written here. Please refer to the comments for more information.
Currently, there is no project wizard for creating Design Studio extensions, and creating projects by hand from scratch is not recommended, due to the complexity of the configuration. Yes, it is theoretically possible to create an Eclipse plugin project by hand that attaches to the Design Studio extension point, but it is an order of magnitude more complex than what we’ll be doing here. The normal, recommended workflow is to start with an existing project, clone it and modify the clone. In this instalment, we’ll create our gauge project using the clone method. We will have to do some Eclipse Plugin housekeeping and some Design Studio extension-specific housekeeping first; but after this, you should be able to develop content.
Step 1.1 – Copy/Paste a project (e.g. com.sap.sample.coloredbox) and rename it. Let’s call it “com.sap.sample.scngauge“.
Mea Culpa: I spelled gauge incorrectly in that screenshot. 😉
Step 1.2 – In the new project, open the file ../res/additional_properties_sheet/additional_properties_sheet.html in the editor and empty it. At a later time, we’ll be creating a new Additional Properties Sheet (APS) page from scratch.
Step 1.3 – Open the file ../res/additional_properties_sheet/additional_properties_sheet.js in the editor and empty it.
Step 1.4 – You should be staring at an empty additional_properties_sheet.js. Before we start adding anything back to it, let’s take a moment to explain the method to the madness. Behind the scenes, in the SDK framework, there is a Javascript class called sap.designstudio.sdk.PropertyPage. This class contains a function called subclass(). This subclass function is your hook for modifying sap.designstudio.sdk.PropertyPage. It takes two parameters: a “class name” and an anonymous function. Behind the scenes, Design Studio will be monkey patching your code into an instance of the master class, but the dev team decided to try and increase the usability of the SDK and not force you to do it yourself.
The “class name” parameter can be anything you want. The SDK samples follow the convention of using “<ComponentName>PropertyPage“, so that’s what we’ll use here and call our class “com.sap.sample.scngauge.SCNGaugePropertyPage“, but you are free to call it “Ralph” if that pleases you.
The anonymous function will be the subject of future discussion, but we’ll just leave it “blank” (i.e. function() {}) for now. If you don’t know what anonymous functions are, here is a good article on the subject. Tl;dr: “anonymous” functions are functions that you stick in place of variables; sort of a “here, execute this code” thing.
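As a quick, self-contained illustration of that “here, execute this code” idea (the names register and entry below are invented for this sketch and are not part of the SDK):

```javascript
// A function that takes another function as its second argument,
// in the same shape as subclass(name, fn).
function register(name, fn) {
  // Call the supplied function right away and keep its result.
  return { name: name, result: fn() };
}

// Passing an anonymous function literally "in place of a variable":
var entry = register("demo", function () {
  return "executed";
});
// entry.name is "demo", entry.result is "executed"
```

In the SDK case, the framework holds on to your anonymous function instead of calling it immediately, but the mechanics of handing a function over as an argument are the same.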
The file should look like:
sap.designstudio.sdk.PropertyPage.subclass("com.sap.sample.scngauge.SCNGaugePropertyPage", function() {
});
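To make the monkey-patching idea concrete, here is a minimal, self-contained sketch of how a subclass()-style hook can work. Everything in it (PropertyPage, registry, createInstance) is invented for illustration and is not the actual SDK implementation:

```javascript
// A stand-in "master class", playing the role of
// sap.designstudio.sdk.PropertyPage.
function PropertyPage() {}

// The subclass() hook: remember the caller's function under a name.
var registry = {};
PropertyPage.subclass = function (name, fn) {
  registry[name] = fn;
};

// Later, the framework creates an instance and runs your function
// with `this` bound to that instance (the "monkey patching" step).
function createInstance(name) {
  var page = new PropertyPage();
  registry[name].call(page);
  return page;
}

// Usage, mirroring the pattern in additional_properties_sheet.js:
PropertyPage.subclass("demo.PropertyPage", function () {
  this.greeting = "patched in";
});

var page = createInstance("demo.PropertyPage");
// page.greeting is "patched in"
```

The takeaway is that the code you write inside the anonymous function ends up running as part of a framework-created instance, which is why you never call subclass()’s function yourself.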
Step 1.5 – Leave the component.css alone for now 😉 We’ll cover css usage in a later instalment.
Step 1.6 – Editing the manifest file is part of the Eclipse Plugin housekeeping that you must do. In the manifest file (../META-INF/MANIFEST.MF), edit ID and Name.
ID = com.sap.sample.scngauge
Name = SCN Tutorial Gauge
Leave the version alone for now
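For reference, the ID and Name fields in the manifest editor map onto lines in the MANIFEST.MF source roughly as follows. This is a sketch: your generated manifest will contain additional headers, and the singleton directive shown here is an assumption based on typical Eclipse plugin manifests rather than something this tutorial specifies:

```
Bundle-SymbolicName: com.sap.sample.scngauge;singleton:=true
Bundle-Name: SCN Tutorial Gauge
Bundle-Version: 1.0.0
```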
Step 1.7 – Design Studio extensions can include new script commands. Open the file ../contribution.ztl. ZTL stands for Zen Template Language (Zen being the original development codename for Design Studio). This file is where we’ll be defining our gauge components BIAL script functions in future instalments. For now though, we won’t be defining any functions, but we’ll go over the basics of how it is structured.
Digression:
Suppose that we have an extension named com.sap.sample.myextension, that this extension contains a component called “MyComponent”, and that MyComponent has a string property called “something“. Suppose as well that we’d like a getter function in BIAL allowing us to get the value of something in our scripts, and that we’ll want to call this function getSomething().
class com.sap.sample.myextension.MyComponent extends Component {

  /* Comment. */

  String getSomething() {*
    return this.something;
  *}

}
The ZTL parser looks for a Java-style class declaration and matches it up with the proper component.
Comments within this “class” become viewable in the BIAL editor’s value help.
The ZTL parser also searches within the class for function declarations (they have the format <Returntype> <FunctionName()> and monkeypatches the new function into the ZTL library for this component type.
You might notice the “{* <JavaScript> *}” syntax. The curly-brace/star combination is the escape sequence for nesting Javascript code into the function.
For now, simply declare the “class”, but don’t add any BIAL functions.
class com.sap.sample.scngauge.SCNGauge extends Component {
}
Step 1.8 – Right click on ../res/icon.png and select Delete.
Download the attached file, gauge.png. Copy it into your ../res folder. Right click on the project and select Refresh.
There should now be one file in the ../res folder, gauge.png.
Step 1.9 – (Update! Still works but is deprecated. See comments) We need to add our JavaScript bootstrapping infrastructure. Design Studio’s SDK framework monkey patches the runtime, client-side Javascript code the same way that it does with the additional properties sheet. The only real difference it that it is sap.designstudio.sdk.Component that will be subclassed this time. ../res/js/component.js is where we’ll be putting our client side Javascript code in. This file will be the core of any nontrivial extension. For starters, it should just have a subclass declaration, with a name and an empty anonymous function as parameters.
sap.designstudio.sdk.Component.subclass(“com.sap.sample.scngauge.SCNGauge”, function() {
});
Before we continue, we want to a small edit to our component.js file. Itis an acknowledgment that during the usage of this code, the scope may not always consistent. this may refer to the component, but we can’t always be certain. We do know that this is the component at bootstrap time. So we add the line “var that = this;”. Now, we can refer to that and know that it is the component, even if this is not anymore. Our component.js should now look like this:
sap.designstudio.sdk.Component.subclass(“com.sap.sample.scngauge.SCNGauge”, function() {
var that = this;
});
Step 1.10 – Last, but certainly not least (some of the other steps were prerequisites), we start working on the the extension and component metadata. To do this, we’ll need to edit ../contribution.xml. You can use the Eclipse XML editor’s design mode if you want, but we’ll be using the Source tab and hand editing the xml.
Step 1.10a – Define the main, Extension metadata; the attributes of the sdkExtension element.
eula= We won’t bother with an EULA, as it is outside the scope of this tutorial series.. You can leave this attribute blank, remove it or write whatever you want.
id= It is very, very, VERY important that the id here matches the ID defined in the MANIFEST.MF file. You can copy and paste. If the two Ids don’t match, your extension won’t work. Note – If you extension won’t load, this attribute (and title, below) should be the first place you check when troubleshooting.
title= The same issue with ID is also present with title/name. The title attribute in contribution.xml must be an exact match to the Name property in MANIFEST.MF, or the extension won’t load properly. Again, you can copy and paste.
In contribution.xml, edit id and title
vendor= and version= can also be copied over from MANIFEST.MF.
Xml namespace information can simply be left as is.
Step 1.10b – Our extension will have a single component, called “SCNGauge”. We will define the metadata for this component as attributes in a component element.
databound=”false” Setting this attribute to true, bunds the component to a data source and we don’t want to do this just yet.
group= can be left blank for now. We’ll add a group later. If no groups have been defined, the component will land in a group called Custom Components.
handlerType=”div” There are multiple options.
- Use “datasource” if the component that you are building is a custom data source. Since this project is a visual component and not a datasource, we won’t use this option.
- Use “sapui5” if the component is going to be using the SAP UI5 library. I’m a big fan of UI5, as it allows you to quickly build complex user interfaces. In this project, we’ll be running a closer to the metal, so we won’t be using UI5. Note – as of 1.5,we only support the commons (desktop) version of UI5 in SDK extensions and not yet m (Mobile).
- Use “cvom” if it is a chart that uses the CVOM (Lumira Visualizations). This feature will be available in 1.6.
- Use “div” for anything else. This provides you with a div container in the app and you can do anything you want inside it. Since we will be building our gauge with the D3 library, we will use div.
icon=”res/gauge.png” This is the image file that we added in step 1.8.
id=”SCNGauge”
newInstancePrefix=”SCNGAUGE_INSTANCE” If you don’t want the component aliases of your gauges in Design Studio apps to be SCNGAUGE_1, SCNGAUGE_2, etc. and prefer them to be called <somethingelse>_1, <somethingelse>_2, etc., you can define that here. If you leave this property in, but blank, your component aliases will be _1, _2, etc. Let’s add a value, such as “SCNGAUGE_INSTANCE”. (You can also remove it entirely and its alias will be SCNGAUGE_1, SCNGAUGE_2, etc.)
propertySheetPath=”res/additional_properties_sheet/additional_properties_sheet.html”, presuming that is what you called in in step 1.2.
title=”Gauge” This is the title that the designer will see in the pallette.
tooltip=”” You can leave this blank, or write a novel. If this was anything other than a tutorial, we’d be sure to enter a helpful clause or sentence.
visible=”true” We certainly don’t want our gauge being invisible.
Step 1.10c – (Update! Still works but is deprecated. See comments) Insert an stdInclude element, before the jsInclude element. This element allows you to tell the DS SDK that it needs to include a standard library. You don’t need to add a stdInclude element for Jquery. JQuery is always included. Likewise, if sapui5 is the handler type, the ui5 library is also automatically included. Note that if you want to use D3, you should include it via stdInclude. You might be tempted to include it via jsInclude (see 1.9d, below), in order to get the latest and greatest version of D3. You’d regret doing this, as D3 does not play nice with Require.js. The D3 copy included in the DS SDK has been modified to play nice with Require.js.
Javascript was not originally meant to be a normal programming language, but was always intended to be self contained scripting for web pages. As such, it does not include any include/import mechanism of pulling in libraries on its own and requires the web browser to handle library imports (via the HTML script element). Require.js brings the classic “include/import”, common to most programming languages.
Step 1.10d – (Update! Still works but is deprecated. See comments) Leave the jsInclude and cssInclude elements as is. You will only need to change their values if your JavaScript file is not component.js or your CSS file is not component.css. If you wish to import another library – that is not part of the DS SDK Framework, then you can add one (or more) additional jsInclude element(s).
Step 1.10e – Remove the property elements. We’ll be adding properties in later instalments, but for now, we’ll only use default properties.
Step 1.10f – Look in the initialization element, at its defaultValue child elements. WIDTH and HEIGHT are default properties, inherited from the master component template. You can leave them as is, or change their values. Delete any other defaultValue elements, if there are any.
Step 1.10g – If you are in source mode (at the bottom of the editor, the Source tab is selected), your component.xml should now look like the following:
<?xml version=”1.0″ encoding=”UTF-8″?>
<sdkExtension
eula=””
id=”com.sap.sample.scngauge”
title=”SCN Tutorial Gauge”
vendor=”SAP”
version=”15.1″
xmlns=””
xmlns:xsi=””
xsi:schemaLocation=” “>
<license>license</license>
<component
databound=”false”
group=””
handlerType=”div”
icon=”res/gauge.png”
id=”SCNGauge”
newInstancePrefix=”SCNGAUGE_INSTANCE”
propertySheetPath=”res/additional_properties_sheet/additional_properties_sheet.html”
title=”Gauge”
tooltip=””
visible=”true”>
<stdInclude kind=”d3″/>
<jsInclude>res/js/component.js</jsInclude>
<cssInclude>res/css/component.css</cssInclude>
<initialization>
<defaultValue property=”WIDTH”>100</defaultValue>
<defaultValue property=”HEIGHT”>100</defaultValue>
</initialization>
</component>
</sdkExtension>
If you are in design mode, it should look like the following:
Step 1.11 – Now you are ready to test your project. Right click on the project. Select Run As -> Eclipse Application. Design studio should now be launched, so that you can debug the component.
Step 1.12 – When Design Studio starts up, create a new application. Name your app SCNGAUGETEST and click on Finish. It should be noted here that your normal workspace is not being used, but rather a workspace associated with Eclipse. This means that you won’t be able to see apps that you created when you started Design Studio normally and apps created within debug sessions (i.e. you started Design Studio from Eclipse) won’t be visible when you start Design Studio the normal way.
Step 1.13 – Look in the Custom Components group of the palette (remember, we did not define a groups or assign the component to any specific groups yet). Here you will find a green arc, with the label Gauge.
Step 1.14 – Drag the new Gauge component into the canvass and have a look at it in the properties sheet. Here you should see all of the default properties that your component has (and that most of the default properties are bindable).
Step 1.15 – Run the app. Your component won’t be visible, as it is just an empty div, but it should be error free. Now your are ready to start filling in the blanks and building an actual component.
The completed extension (as of part 1) is available as a Github repository.
Next Instalment: Part 2a – Your first Steps with D3
David,
I gotta say that I know this took a lot of effort to write out, so THANK YOU, and I’m going to send people to this series of tutorials first and if they still do not get it, they cannot be helped!! 🙂
This is a terrific walkthrough with explanation along the way of the “why” are we doing each step that has to otherwise be inferred from the minimal (by intent, I understand) SAP Help developer guide.
I wish I’d had this 2 years back!
Hello David.
The idea of such blog is great, really.
I have a question tho.
In this blog post, Reiner is talking about an experimental feature that allows DS to take extension directly from a .jar file (that is the extension in itself), instead of creating a deployable package with Eclipse. I haven’t tested the feature but it was said as experimental, is this something planned to be stable soon?
I might seems to be something light but it could actually remove hindrances faced from automated package building as it relies on unofficial Maven plug-in, custom Ant scripts, … for example.
Br,
Vincent
We still have to make sure that it is stable, before it can be released. It would certainly make development a LOT easier.
I double-checked everything I did to make sure I followed the steps exactly. When I try to run the Eclipse Application, DS starts up but I do not have the option to create a new application.
I get these messages in Eclipse. I do not know how to debug to find what the issue is. Any help would be appreciated. This might not be anything but I installed Eclipse Mars but when DS starts up it is in Eclipse Luna.
!MESSAGE Could not bind a reference of component com.sap.ip.bi.base.application.DeclaredServiceActivator. The reference is: Reference[name = IActivatorBase, interface = com.sap.ip.bi.base.bundle.IActivatorBase, policy = dynamic, cardinality = 0..n, target = null, bind = addRICActivator, unbind = removeRICActivator]
!MESSAGE Unable to create class ‘org.eclipse.ui.internal.e4.compatibility.CompatibilityView’ from bundle ‘442’
Thanks.
Sandy
I found a post related to my issue. I am going to check to make sure I have the correct 64-bit or 32-bit versions.
i’m running into the same problem that you ran into Sandy, however I’m running Luna for both development and DS.
Here are the exact messages:
Development Eclipse Environment:
Version: Luna Service Release 2 (4.4.2)
Build id: 20150219-0600
DS Eclipse Environment:
Version: 4.4.2.v20150204-1700
Build id: M20150204-1700
Windows 10 x64 bit
DS: 15.1.2
Were you able to find a fix for this?
Pete
Regarding the “deprecated” update. In Design Studio 1.6, we introduced a more modern alternative to cssINclude, stdInclude and jsInclude (see parts 1,10c and 1.10d). This new requireJS element is something of a “one element to rule them all”. As this instalment was written on 1.5, the method listed was current when it was written.
It is important to note that we did not delete the old way of doing it. We deprecated it. This means that it is still ok to use the “old” way and we recommend following it through the tutorial to lessen your confusion. I wanted to finish the functionality of the gauge before revisiting the include statements and this is how this tutorial proceeds. Part 13 of this tutorial series details the migration to requireJS. | https://blogs.sap.com/2015/09/29/your-first-extension-part-1-project-creation/ | CC-MAIN-2019-35 | refinedweb | 3,053 | 58.79 |
I’m writing a piece of code for a scoring system, however, every time I try to run it nothing happens, can anyone help?
My code:
def scoringsystem(): print("Welcome to the menu") print() a=True while a==True: print("Would you like to input for Individuals i or a Team t") answer = input("Enter here : ") if answer == "t": event = input("What event did they do?") placement = str(input("What position did they come? ")) if placement == "1st": print ("They got 3 points!") if placement == "2nd": print ("They got 2 points!") if placement == "3rd": print ("They got 1 point!") else: print("They got zero points, sorry.") if answer == "i": event = input("What event did you do?") placement = input("What position did you come?") if placement == "1st": print ("You got 3 points!") if placement == "2nd": print ("You got 2 points!") if placement == "3rd": print ("You got 1 point!") else: print("You got zero points, sorry.") answer1 = input("Would you like to input another score s or exit e ?") if answer1 == ("s"): print ("") else: a=False | https://forum.freecodecamp.org/t/nothing-happens-when-i-run-my-code/465066 | CC-MAIN-2021-31 | refinedweb | 173 | 86.2 |
How can I get this graph:
(from Speeding up StackExchange)
It shows what percentage of web requests took longer than N seconds. I have IIS logs and LogParser installed but not sure how to proceed.
This is how you can do it with LogParser and Excel:
Step 1 Create the following query and save it as "Time taken graph.sql":
SELECT QUANTIZE(time-taken, 100) AS t, COUNT(*) as count
INTO 'Time-taken-graph.csv'
FROM u_ex*.log GROUP BY t
Step 2 Run the query and export results to CSV file:
LogParser.exe file:"Time taken graph.sql"
Step 3 Open CSV file in Excel. I will use Excel 2010 as example.
Let's say your data sits in A1:B401 range:
Put "Time" in D1 cell. Put "Percent" in E1 cell. Fill time in D column with series starting from 0 to 5 with step 0.1:
Step 4 Put the following formula into E2 cell (you will need to replace 401 with your value):
=SUMIF($A$2:$A$401,">="&D2*1000,$B$2:$B$401)/SUM($B$2:$B$401)
Copy the formula to all cells in E column that have corresponding time value. Set style to Percent by pressing Ctrl+Shift+%
Step 5 Finally, build line graph based on the data in D and C columns:
I wrote a python program to generate that graph using the logs generated by our load balancer and flot to draw the actual graph.
I went through a couple of iterations before I decided on that graph:
I started with a scatter plot (response time versus time of day) which is informative in it's own right, good for getting a good feel for the shape and variance of your traffic even if it's not particularly good for communication.
Then I tried a histogram, which wasn't particularly useful because of the high variance.
Finally I ended up with this which is based on a histogram, but is cumulative and inverted.
I'd post the code but it is so specific to what I was doing that it isn't going to help anybody. So here's an approximation of the core function:
def point(times, cutoff):
"""
times: sorted list of response times
0 <= cutoff < 1
"""
size = int(len(times) * cutoff)
return (times[cutoff], 1 - cutoff)
You then plot the (x, y) coordinates as cutoff ranges over [0,1[ using your favourite plotting library.
(x,,377 times
active | http://serverfault.com/questions/132892/using-logparser-to-generate-average-response-time-graph | crawl-003 | refinedweb | 409 | 69.72 |
Transcript
My name is Aarti Parikh and today I'm here to talk about Go, and why it's a really important language for enterprise development today. Everybody is here in this room; it's pretty crowded, I think. It's the largest audience I've ever spoken to, so I'm pretty excited to be here. I am a new engineering manager. Before that I mostly worked in startups. For the last three years I've been writing Go. Then I started the two meetups that Ashley mentioned in Silicon Valley.
Now enterprises are starting to pay attention. They're like, "Oh, we also want to do this. We can save money, and we can find good developers." People say history repeats itself. I don't think it repeats itself. I think that we have all of these bad things that happen in history, and then in the next generation, we try to fix them in some way, but I don't know if we actually fix them, or we make more of a mess, or whatever. We're doing something. It happens in programming languages too. I love Mark Twain, so I have his quote up there.
Going back in history: this slide is actually going back to 1970. I was alive, but I wasn't writing code in 1970. But all these companies were writing business applications. This is an enterprise application development pitch: Go is good for enterprise application development. When we talk about that, we're talking about invoicing and payroll and finance and all of those things, the business logic stuff, that was running on Cobol-based systems, on proprietary systems. Then came the open systems, the open architecture and X3H6 and all of that. A lot of C and C++ was being written, but it wasn't taking off for the business application needs.
There was this sense of "Oh, this is too hard, this has these problems." This again, like Ashley said, isn't about which language is great; it's about which language is good for which system you're writing. If you're writing low-level stuff, C and C++ were great, but they weren't great for business applications. And that's where Java's timing in the early 2000s was perfect. I've written a lot of Java. I raised my children writing Java. My daughter's taking AP Computer Science now and she has to study Java, so I'm helping her with recursion and all of that.
But why not C++? C++ is on the list of the most difficult programming languages in the world. It's number 10, but I think Haskell is number one for me. I've tried to learn Haskell many times; I have several FP books on my bookshelves. But it's a hard language to learn. And if you've been doing application development and you are in the enterprise application development world, C++ is not well suited. Going back again, Java was perfect for its time. When Java came in, in the early 2000s, it was groundbreaking. It was like, "Oh, we don't have to deal with C makefiles and autoconf and all the build tools, and compile for this platform and compile for that and all of that." It changed things, and memory safety, which was the earlier talk today, was important in this area, because Java was garbage collected and memory safe, with no raw pointers.
Innovation & Growth Early Years
This is a little bit of history. I have some history with Java. I was young, I was really excited about coding and I was doing all of this stuff. Every time a new article would come out on O'Reilly (O'Reilly was like our water cooler, just like Hacker News is today), we're like, "Oh, this article came out on Hibernate and now everybody's excited about Hibernate." And Gavin King, I don't know if anybody remembers his name. Marc Fleury, who wrote JBoss, changed things because it was open source, and shifted things away from WebSphere. I don't know if anybody here knows those names.
But extreme programming, anybody? Anybody read that book? Kent Beck. Oh yes, that was groundbreaking. That book changed my life, because JUnit and unit testing, and all of the innovation that happened during the Java years, was really amazing, at least for me. Then design patterns, and we were all innovating on all of these things: dependency injection, Spring. We're still using Spring; that is what enterprise application developers are using everywhere. Maven, and how it did dependencies, was beautiful: you had the Maven repository sitting locally on your system and then you had all the layers.
I see the excitement around VS Code today, and it's a little bit of history, because the Eclipse plugin architecture reminds me exactly of what VS Code is today; I wrote plugins in Eclipse. I was so into all of this. It was a good time. And I was promoted. I was an architect, and it was like, "Hey, I have an A in my title." I was an architect for a healthcare startup and I was writing a lot of abstractions; that was the thing. I loved it. We were already abstracted from the JVM, so you don't know what's happening on the hardware, and then we were writing frameworks upon frameworks. It was fun. It's fun to write, "Hey, I can just do this and all of this stuff happens on its own."
So that was sort of my story. But what happened as a result was that engineers on the ground floor who were coming in were like, "This is tedious. This is verbose. Why am I doing all of this factory, builder, blah, blah, blah?" To write a thread in Java was like six lines of code: an executor, a thread factory, blah, blah, blah, and then something inside. It was crazy. It felt like extreme object-oriented programming, and heavyweight frameworks.
Don’t Underestimate Developer Happiness
Then I went to this conference and I met Dave Thomas and I read his book. Anyone, Dave Thomas, “Pragmatic Programmer”? Oh, my gosh. Only two, three people. So yes, Bruce Tate, he wrote this book, ''Seven Languages in Seven Weeks.'' And he tweeted this outside, “Dave Thomas ruined Java for me.” I saw that and I was like, “That's me. He ruined it for me too.” I read his book and then I read all of the Ruby books that came out and then it changed things for me. So my thing with enterprises- because this is an enterprise application developer conference- is that developer happiness matters. Don't underestimate it. I was unhappy. I was like, “Hey, I wrote all of this stuff,” but I wanted to write in a fun language that me happy and Ruby made me happy. Ruby on Rails at the time, convention over configuration that was another huge growth that happened in the community.
This is the time that I also got involved in communities where I found the Ruby on Rails and Ruby community to be amazing. I was like, “Wow, this is a little group of really passionate people, writing stuff, doing stuff that they're excited about.” That was a sort of my story. At PayPal, the story was node.js. So the VP who's Org I work in, Geoff Harold, he's kind of famous at PayPal for heralding this big change with node.js because Java- PayPal was all Java. Then he came and I sat with him once and he's telling me his story, like, "You can't tell this to anyone," so I'm not going to tell his story. But how he brought about that change is really cool. So if you ever come to PayPal and have a one-on-one with him, you should talk to talk to him about that. He was able to bring about big change at PayPal, and that developer velocity at PayPal just like boomed. You look up the Java to node.js story. That was a big part of it. In my discussion with him, developer velocity mattered. We wanted to ship products fast because we wanted to keep up with the competition. Not only that, but you see your score and running and that's important.
10 Years after the Rise of Dynamic Languages & Web Frameworks
But what happened after? I don't know how many people in the audience follow Gary Bernhardt? Yes, he's awesome. Yesterday he tweeted that he runs this conference Deconstruct Con, which is really cool. If you all get a chance to go there, and he does also live coding and all of this. He tweeted that, “Oh, my gosh, 10 years after the rise of these dynamic languages we're seeing fatigue.” There's all these security vulnerabilities in these things. There's all this dependent NPM install downloads, the whole internet on my local laptop. It's like, “Okay, this isn't fun anymore. What was fun 10 years ago is not fun anymore.” We're back to not being simple and fun anymore. More than that, I have this maze. This is the best image I could use to describe callback hell in my opinion.
Then, just as you're in the node event loop or whatever, and you're dealing with all of this, crazy, you can't figure out how your code is working. So because you have a state machine always to deal with, with all of these callback things, and it's not sequential like where you can just read your code and say, “Talk to Don, what's happening?” You have to go on these code paths.
How Should I Change My Code to Gain More Performance in Multicore Environments?
So that's what we're facing right now, people were facing. Then the other thing, people who were close to the hardware, like the C, C++folks always know more than you always, because they're close to the hardware. They're like, “You’re writing all this code in scripting languages but you're not using all the cores.” And that was the message that people were getting. “Hey, you're node.js is single-threaded and you're not using all the cores or global interpreter lock. How do you leverage that?” There was, of course, another framework, I think BM2 is what people use in the node world to scale on the different processes. But then again, framework. It's not in the language itself where you could use all of the processors.
So that was kind of the unease with that. And then now we are here and this is not my slide. I have attributed this to the person who created it. But I thought this was a really neat slide. We were running on hardware. Then we went to okay, easy two instances or whatever. Then we were like, “Hey, we just need something, even simpler, or get pushed Blah and my code runs.” And now we're like, “We just want to run something, and you all take care of all of the service, everything underneath. I just want my language to run. It doesn't matter what I'm running. If I ship you a minor, you should be able to run in on your server, somewhere.” And so serverless, it's not on.
I think that developers want this. They want even more simplicity. We're going to this function as a service serverless. I'm sure like a lot of people were in. Serverless it's this thing that people make fun of, but it's really about developer experience, because as an engineer, if I'm a product developer, in product development teams- they're called PD teams at PayPal - I just want to ship my code and I want to do a good job and I want the product to be good. So that's what's needed.
This is a code from Bala Natarajan. He heads all of bill, trust and release platforms at PayPal. It supports 4,000 developers and we have 2,500 microservices. It's kind of crazy. One code I heard from someone in the platform team at PayPal says like, if we sneeze PayPal shivers. So this really small core team supports lots and lots of developers at PayPal. There's a really great talk from our CTO from Google nest, where he talks about how PayPal operates at scale. So his thing is he's kind of like, “You know, I have seen so much in my life, Aarti. Developers keep shifting with line. Every generation comes and they want more simplicity, simplicity.” And he saw that with node.js. He's like, “We're definitely seeing another shift now where things just run somewhere and backends are running, front ends are running and we just need to make this work for developers.” That's the new paradigm. We need to simplify it to that. So I will end with that and I will take a quick water break.
Designers
It took me a while to get here, but I had to tell my journey, my story with Go. So the designers of Go, how the language was designed- it's something when I tell people, they're like, “Really?” They're like, “Yes, the person who wrote UNIX also wrote Go.” That's kind of a big deal. Somebody tweeted “Bell Lab refugees wrote Go”. I thought, I love that tweet. So yes, these people like performance. They like languages that are simple. Robin Grades, I was researching about him. It's like, “Oh my God, he wrote everything”. He was in the JVM, he wrote the JavaScript V8 engine, also wrote the chubby locks over, the cluster thing that makes Google work, all of the Google clusters work, in borger whatever. He wrote that too. I was like, "Oh my God." Then the face of the Go programming languages is Rob Pike. He's like the brainchild. He was also on the original Unix team and a lot of the Go ideas come from Plan9, some of them. He also rode UTF-8. It's not UTF-16 like JavaScript and windows and all of that. So it's in UTF-8.
Those are the designers. So it comes from that place. It comes from people who cared about performance, simplicity and wanted things to run. I was on Rob Pike's blog and I've read this blog like five, six times. I recommend you all read it too. The trigger for him for designing Go was, they were combining all of this C++ code in Google. And then he went to some meeting about C++, what they're doing to improve. The discussion was, “Hey, we're going to add these many features in C++.” He's like, "Really? Do you not really care about developers at all?" So again, people that are asking me about Go, I was like, developer happiness. That's what this talk is about. Developer happiness. So it matters all the way. People are unhappy and then go and build things in the languages they want.
I love to read like commentary on where ideas came from and I think that matters, because Go may not be the final destination. Who knows? I'm excited about Russ too and I'm excited about all this other stuff. But you want to know where ideas came from. To me it was really interesting, like, “Hey, we took C and we want to fix all this stuff.” But we were making it and it turned out to be something else. Then what they really wanted to do- because they were mostly programming in Python and C++. So that's what Go kind of looks like.
Simple
So it's simple. Go is a really simple language to start with. It has 25 keywords, low cognitive overload loop for everything. You don't have seven different ways to do something. I remember submitting poll requests for Ruby programs and I would get 10 different comments. “Oh, you can concise it like this and you could put it in one line and you could tweak this.” And there's nine different ways to do something. I'm spending most of the time making my code look pretty, versus understanding how my code actually runs, where it's supposed to run. So I think that that changed. I could spend all my life making pretty code, but if I don't understand how it's operating, or that becomes more important if I want to scale my applications.
My punch line here was it feels like Ruby runs like Steve, but then I wrote, feels like scripting language. So I was a former Ruby programmer. I still have a complicated relationship with Ruby. I love Ruby. So readability was paramount to them. Then simple code is debuggable. You can go in the goal line code right now on GitHub and start reading a top down. It's just fun. Hey, I can go and read the HTTP library. I don't need no framework for it. I can go and read how Go channels are made. Even that code you can go and read, “Okay, this is H times track.” “Oh, that's how it works,” so you can actually go and read the code and it's readable. It's top down, you can figure it out. So I love that part. It makes it very accessible for me.
And it can scale on large code bases where you have by large code bases. I'm an engine manager now, I work at an enterprise company. I have 10 developers on my team and then I'll have a couple of interns. I have a couple of RCGs. Rohart is nodding his head, and there'll be maybe a few senior developers, there will be one architect. You're not in the startup world where everybody's a senior developer You're an enterprise company. So somebody coming into your code base needs to be able to read it. It's a very different need than working at a startup where everybody's a senior engineer and has built systems.
Stable
The other thing that enterprises need is stability. Stability of a language. I think this was really one of the big successes of Java, was it was always backward compatible. Every release they came out with, it was backward compatible. They had the deprecated flag that, “Okay, don't use this, but if you use it, it's still going to run. We're going to make sure your code runs.” And that really mattered. I think that's still the big success of Java, and the Go community is paying attention. So Go2 is in the works right now. One of the talks that I attended from Ian Wallace, who's on the Go2 team was like, “We're looking at that, we're looking at the success of C with not that many changes, and the success of Java with not that many changes, and we want stability.” I think that was one of their things too, where they don't add many changes in the language pack. So the focus for the Go team was, “Let's make compiler performance. Let's make runtime faster.” Performance, performance, performance; not waste time on language features.
Who Is Using Go?
A little bit about who's using Go. I think this slide is important because people may know, may not know, that all of these things are written in Go. Docker is written in Go, Kubernetes is written in Go, etcd, [inaudible 00:22:30]. IPFS is an interesting project that I discovered, and it's a good one. Hopefully one day I can contribute to something. It's a fun project and it's written in Go. All of Hashicorp stack is written in Go. Prometheus, OpenCensus, influx DB and so many other ones. Cockroach DB, Victus. Sogou is presenting here somewhere. It's all written In Go.
Also these companies are writing in Go. PayPal has a bit of Go in the infrastructure side right now, mostly. But we have Go, we're talking about Go. I have seen job postings from big companies looking to build Go framework teams very recently in the Bay Area. So I can't name names, but they have also approached me, but I'm not going anywhere. But yes, Go is picking up everywhere. There are going to be go framework teams. There are going to be enterprise companies saying like, “Hey, we're going to write microservices in Go.” So this isn't just about infrastructure. I think it's going to be more, that's my thing.
Design
The design of Go, it's natively compiled, statically typed, open source. I'll say all these things- it's on the slide, but I'll say it again. It's important because I had an interview and someone says, "Oh no, Go, is it really natively combined? There has to be a virtual machine somewhere." They just couldn't believe that it was natively compiled. This was an interview and it was a senior director asking me this question. It was pretty interesting. It has concurrency primitives built in. That's one of the things that people love the most about Go. They'll come, "Oh I came to learn Go because I want to learn about Go's concurrency model." So yes, that's one of the things I talk about, in the next slides.
It's a C-like language. I think I had a lot more slides with code, but the feedback I got from my mentor was don't put code on slides so I removed them. But if you, later on, want to meet me in the hallway entrance, I can share some code. But it's very simple. It reads like C. I mentioned about natively compiled. So this is a question; I think it's an important question. At least this is how I learned, is through FAQs and not by reading. So Go has a really nice language specification. If you are one of those learners who learns from reading the whole specification packets in our forum and all of that, it's there, go read it. I struggled, it was not easy for me, so I jumped around. Why is it so big? It’s because Go binaries include the Go runtime. It's all packaged together. You've got everything in one single binary that's running your Go code.
Compiler
The compiler, it's fast. It's really fast. Early, it was written in C. They ported it to Go, it's still fast. It's not as fast as the C one, but they admit it, but it's still fast. They're focused more on the runtime than the compiler for performance. There's dependency cache and you can do Go Bill Myers, and you can cash your dependencies, statically linked to a single binary. So, for example, if you're linking C libraries, for example, if you're using Kafka and you have Liberty Kafka, you can link it with your Go binary and ship a single binary if you're using the Kafka driver. I did that in a previous project and you can do that. You can dynamically link too, but you can statically link the binary. There are provisions to do that. I put this l slide for cross-compilation there because I feel like people really need to see that it does do all of this cross compilation across all these platforms and oasis.
So that's one of the things, and there's also support for assembly. So you can ride hand-coded assembly, and there's a lot of poll requests that come from academia for the math libraries. If you're watching the Go line GitHub, you'll see that on the math libraries for sign costs down and all of those, a lot of that is in assembly and people are constantly tweaking, people- their domain, is constantly making it better, which is great, right, because we get a better language.
Data Oriented Design
Go is not object-oriented. It's data-oriented. By data-oriented it has struts and it has methods that are outside of struts. They are not related; there is no class keyword in Go. There is no extends keyword in Go. So class extends- we don't have that. We just have struts, like C, and we have methods that are outside of struts. So stayed in behavior is separate because it's based on the data, the strut. Struts do a lot of other stuff too, like Go behind the scene does alignment and all of this stuff to make it compact, very much because they are coming from the C world. I don't talk about it here because there's so much to cover.
So Go uses interfaces to implement polymorphism. Again, like I mentioned, with methods you can implement multiple interfaces. It also prefers composition over inheritance, because we don't do the inheritance tree. Encapsulation is done with three things. You've got the packages. Packages are not like Java namespaces, they're more like libraries. They're completely encapsulated. If you start thinking of them as Java namespaces, it's trouble. If you think of them as libraries, like, “Hey, this needs to live independently off everything else,” then it works. It follows a very simple upper case convention for things that are exposed out of that package and things that are lower case or not. So it's a very simple convention.
There's also this concept of internal, which you don't want it exposed at all, which is sort of protected I think. I don't know which paradigm, but yes, so you wouldn't be able to ask us those portions of a library that you're using. You can embed struts inside Strut. So type reuse is through embedding struts inside struts.
Language Design (continued)
So language design. This is interesting because maybe you took a C-class in college and you have mostly written in Java or mostly written in languages that didn't have pointers, you come to Go on, you're like, "Oh, Go has pointers." It has pointer semantics. These are not your seed pointer arithmetic thing. So don't confuse the two. Pointer semantics is the same thing as Java references. Java is all pointer semantics. Everything is a pointer semantic in Java. Java doesn't have value semantics, and that's something that is a concept that I feel people should tell up front somewhere- maybe I’ll write a blog about it because it was a source of confusion for me too. I'm like, "Oh my God, these stars are scaring me. Pointers," because I haven't used pointers in awhile. It doesn't like that.
Value semantics is really nothing. But if people who came from C++ know this, it's just about copying something instead of using it as a reference. So that's value semantics. Java doesn't have value semantics, but there's a GSR open for the next release of Java to have value semantics. Which is the next one, Java 11 or Java 12? Yes, that's going to have that has a GSR open for adding value semantics. So look at pointer semantics as nothing but references, just like you are in your ordinary code. Use those more than where you need to pass references.
You can do low level stuff with pointers, I played a little bit with it. You can import the unsafe and access things inside that Strut with casting with unsafe. But don't do that unless you have to. So one place where I've seen it is in Go's binary serialization package, GAB. It's called GAB, which is Go's thing for like doing service to service. If you don't want to use GRPC, if you're writing everything in Go, you can use the GAB library, because to make it really efficient, Rob Pike had written it and he was using unsafe that to do all of the manipulation and make it fast. So that's there.
cgo is an option. I have used cgo for image processing in a previous company. That's the only place I needed to use it because there are some things, you’re going to write all of it in Go? For example, like FFmpeg. Who wants to write it in Go? That's a hard library to port to Go. So you end up using cgo for those kinds of things, those libraries that provide those values. The other thing about Go, and I think it was in the previous talk mentioned too, about zero value. Go does that too. When you say new or make or you know any of these things, you're not instantiating a class in Java. All you're doing is you're allocating memory for this variable and you're zeroing out that memory.
So zero value is that you're doing that because this whole zero value concept is interesting. I didn't come from the C word, but the C people are really excited about it. I'm like, "Why are you always talking about is zero value?" I asked Bill. And Bill's like, “Because C had this problem where it would allocate memory but it would not zero it.” So you could do bad things. You could access things inside and mess things up. So Go takes care of that for you. There's a slight performance hit. You would never do this in C, because you're writing firmware in C. You wouldn't want to do anything with performance. But Go cares about data integrity first. So that matters. So things have to be accurate, consistent, efficient. That's more important.
Testing
I'll take a quick break again, sorry. So Go has testing built in. In the toolchain you've got Go test, you've got the benchmarking tests and I had a bunch of code associated with that. If anybody wants to look, I can talk about table-driven tasks and how Go does unit tests. There's also how you can wrap a benchmark test, and test race conditions. I mentioned everything was a reference. So yes, you will run into race conditions. Go has a race detector in the testing toolchain built in, which you can run to detect these kinds of errors. There are also options for coverage, memory profiling, and other things. So it's pretty good. To be honest, I still find it lacking straight up. I think we need a Go testing cookbook. Maybe someone will write a Go testing cookbook that'll tell us how to do tests well. I still love the Java style of doing tests and their ways, so in a very systematic way. Ruby too had great testing frameworks. So I think this is an area where we could do more open source and contribute Go testing frameworks.
Concurrency
Concurrency. There are some really great talks about concurrency in Go. I saw all of them and I still didn't understand concurrency in Go. And I finally get it, so it isn't about running on all of these processors. It's about multiple things. Concurrency is about out of order execution. If I go to a coffee shop and I ask for coffee and I pay the money, but there's someone behind me and there are things that can happen in parallel inside the coffee shop. The things that can happen out of order; I can pay first or somebody's doing something first. It isn't about using all of the machines in the coffee shop for example. You can do certain things out of order, and actually you all should watch the talk- Samir Ajmani has a great talk on this, about concurrency, and he has this coffee example that really explains how concurrency works with channels and goroutines in Go. But it's really about out of order execution.
Go gives you goroutines. They are lightweight userspace threads. Then Go has a cooperative scheduler that's in the Go runtime, that's managing these goroutines for you. The scheduler is deciding which goroutine is going to be run, which goroutine is blocking or whatever, and which goroutine needs the Cisco right now, so I need to go to the OS level. It's managing all of that so you don't have to do that. So it's like a mix. It hits the sweet spot. So node.js had the event loop and was managing all [inaudible 00:36:44] io and working around this, I’m mumbling right now, but it wasn't going to the OS level threads. But Go is doing all of that with this really amazing scheduler.
A lot has been written about the scheduler. There are some great talks. My favorite talk is by Kavya Joshi at Go for con. It's called the scheduler saga. You must watch it. She's amazing. She's my hero. So yes, you must watch that talk, the scheduler saga. It explains how Go handles all of these. There's a lot of literature on it too. A lot of great blog posts.
Channels
Where was I? Yes, this slide. I talked about goroutines. So you've got your threads. I talked about concurrencies, about out of order execution. So now how do I make sure what thing happens when? I need to signal. There are four things happening. There's out of order execution happening, this thing happened, this thing happened, but I need to signal these and bring it all together. So the channels provide the mechanisms that allow the goroutines to communicate. And that comes from this old paper written by Tony H-O-A-R-E- you all can pronounce it in your heads- from 1978. Communicating sequential processes. It's a great paper. That's one of the talks that Go for con was also about, implementing the paper in the original language, which was done by Adrian Crockford. That's also a great talk if you like the academic side of things. That's a great talk to watch.
Channels are signaling semantics. I'm trying to signal what happens first. I'm orchestrating what's happening. This is one of the patterns, and I've just picked one good example where I have a goroutine. In this example, I'm a manager and I'm hiring a new employee, and I want my employee to perform a task immediately when they're hired. Then you need to wait for the result of their work. And so you need to wait because you need some paper from them before you continue. That's what this orchestration is about, and this is the semantics here. So you make a channel, you run a goroutine, and then you're waiting for something to land before you can exit this function.
Tooling
I'll talk a little bit about go toolchain. A lot of people talk about toolchain and Go. It's really, really well written because there's a lot of good tooling built, in especially the Go format, gofmt or whatever. You don't have to talk about tabs and spaces ever, ever again. There's just one way to do formatting of all of your code, and you just do it to Go way, it's in the toolchain, it's there. There's Go LinkedIn, there's Go where, there are other things that the toolchain provides. But it also provides really powerful tools for profiling, memory and CPU and blocking. There are really some really great workshops. We had a workshop for women a couple of weeks ago where we did a Go tooling workshop, where Francesc went through and we worked through all of these examples and wrote code on the Go toolchain.
And then tracing: Go tool trace really helps you when you want to see when garbage collection is happening, how many goroutines are running, the GC pause or whatever, if you're really into all of that. I wasn't into that when I was doing Java, but if you're into that now you can, and now I'm into everything. But it's kind of fun that you can do this very easily and it's built in. You don't have to download a special framework or a library. It's part of Go.
Error Handling
Error handling. Go does not have exceptions. There is no exception hierarchy. If you are like me and have written all this code path, with tons and tons of exceptions, it's like, “Oh my gosh, that's my main program. This is my exception program.” I used to write down like that. You had to, because that was the one way. You had checked exceptions. And I think that Java improved exceptions from C++ and it made them better. But it wasn't like now where we're taking it to the next level. I think with Go, errors are just values. You can do whatever you want with them. You have state and behavior.
Data structures - I'm going to go really, really fast now. Go has arrays, maps and slices, and if anything you should read about slices. Slices is the best thing ever. It just makes Go super fast. It's contiguous chunks of memory. It's not your dynamic array, it's not your array list. I was confused forever. Think of slices as its own thing, and read up on it. Just leave the old paradigms when you start reading up on this. Go has this built in thing for if you want to do a pence and length and all of that. It's in the library. It also has a rich standard library. All of the stuff is there. And you can read on it. I'm going to go really fast.
API development. This is at the heart of everything. I have written swagger and lots of Go swagger stuff. I have written REST stuff, web frameworks are there. GRPC. This is a quick example of chaining middleware and I put this in there because this is something that comes up that you have to do. I'm going to go much faster. This is a really important slide. So seven minutes. You should read up about the Go network polar. It's really, really amazing. This is why Go's network library is amazing. That's why Cisco wants to use it and all these networking companies want to use it. It just does these amazing things. There are great blog posts on it. Underneath it just abstracts all the different operating systems. The community- we have Gopher slack, Golang mailing lists, Global conferences, WomenWhoGo.
The Future of Go
The future of Go. There's stuff happening with Go 2. Generics is something that people miss, improved error handling, dependency management is something that the Go team is working on. And my last slide is for people who are writing Go frameworks. This is my pitch: please, please, please build for developer happiness. Don't build top-down frameworks that make your developers unhappy. Build tooling that helps give developer autonomy.
These are the learning resources for Go that I use. My last slide is why I like Go. I feel like I'm a programmer again. I was writing business logic forever. I've lived in API land, and I feel like I can finally see what's happening. I'm attached to what's happening with systems and I'm very excited about learning more and doing more. There's a great blog post that I would recommend, that's called Go has Heralded the future for Rustin other systems languages. It's an amazing post. Go has all these problems. It's an easy-to-read blog post where there are pros and cons. But yes, this is where we're going. This is what's happening with the changes and things that are coming with Docker and containers and at Go. So thank you all for listening.
Questions & Answers
Participant 1: If Go does not have interfaces, what do you create that implements interfaces?
Parikh: Go has interfaces.
Participant 1: Yes, Go has interfaces. But I just think an interface is something that a class kind of uses, right?
Parikh: Yes. No. You have interfaces and then you have methods. When it's implicit, if you implement those methods that the interface wants, then you have your implementation. So I'll tell you what ...
Participant 1: Where are those methods? Because that to me is a class.
Parikh: It's not a class. It's methods on the struts. I'll show you. I'll show you after. I can show you some code.
Moderator: We have something similar in Rust. So I know this and it is complicated to figure it out.
Parikh: It is. It's way hard. I was like, “Oh my God, where are my classes. I was the same way.
Participant 2: A question about since the stock was at an enterprise applications, I just want to know what does Go support for working with databases?
Parikh: Yes. There is a database in the standard library, but there are also these abstractions that people have built on top of that, that you can sequel acts and all of that that you could use to streamline. You could use ORMs if you want. In one project I used ORMs and then the other project was a gaming company, high performance. And the CTO made us write all sequel by hand, which is a good thing. But yes, you could do both.
Participant 3: I was just wondering if it is a good prototyping language. I have a lot of systems engineers that just come up with some quick ideas.
Parikh: CLI is fun stuff. I want to teach my daughter Go this summer, because I want her to learn. She's learned all this Java, but prototyping is perfect for that so you can see what's happening.
See more presentations with transcripts
Community comments | https://www.infoq.com/presentations/go-lang-design?itm_source=infoq&itm_medium=popular_widget&itm_campaign=popular_content_list&itm_content= | CC-MAIN-2019-18 | refinedweb | 7,288 | 76.32 |
D400 Unity plugin broken on OSX?RGBDude Feb 9, 2018 12:06 AM
I just downloaded the latest version of the SDK from Github and I'm getting this error several times:
error CS0246: The type or namespace name `Intel' could not be found. Are you missing an assembly reference?
Does anyone have any idea why the plugin isn't working out of the box? Latest version of Unity, it's an empty scene and the D435 is plugged in.
1. Re: D400 Unity plugin broken on OSX?MartyG Feb 9, 2018 12:24 AM (in response to RGBDude)
May I please confirm with you if the Unity plugin files have been installed in your Unity project and in the correct locations? On my Windows installation of the SDK (I do not have a Mac to check the location on), the files are located in this SDK 2.0 folder:
Intel RealSense SDK 2.0 > bin > x64
The plugin files that Unity needs are called LibrealsenseWrapper.dll and realsense2.dll.
You should go to your Unity 'Assets' panel, locate the folders 'Plugins' and 'Plugins,Managed' and do the following:
1. Drag and drop the realsense2.dll file from its original SDK folder into Unity's Plugins folder
2. Drag and drop the LibrealsenseWrapper.dll file from its original SDK folder into Unity's Plugins.Managed folder
After these steps are completed, you can then open a Unity scene.
2. Re: D400 Unity plugin broken on OSX?RGBDude Feb 9, 2018 12:59 AM (in response to MartyG)
I download the SDK 2.0 off of Github, but the Unity project does not have the 'Plugins' or 'Plugins.Managed' folders. I don't see the bin folder anywhere, and looking through the github I don't see it anywhere there either.
Why would Intel post incomplete code like this?
3. Re: D400 Unity plugin broken on OSX?MartyG Feb 9, 2018 1:17 AM (in response to RGBDude)
As I do not have a Mac, I cannot check the folder structure of the SDK installation on your machine. Would it be possible to put the name of one of the files into a search box on your SDK folder and find the files that way?
4. Re: D400 Unity plugin broken on OSX?RGBDude Feb 9, 2018 9:07 AM (in response to MartyG)
The folder structure is here: GitHub - IntelRealSense/librealsense: Intel® RealSense™ SDK
It's not there when I search my computer or the repository.
5. Re: D400 Unity plugin broken on OSX?MartyG Feb 9, 2018 10:40 AM (in response to RGBDude)
Apologies for the delayed response. I was thoroughly researching your case.
It seems that you get the Realsense2.dll file by building the RealSense SDK 2.0 source, and then get the LibrealsenseWrapper.dll file by going to the wrappers/charp section of the SDK structure and building the file Intel.RealSense.SDK.sln
The Wrappers > Csharp section of the SDK structure:
librealsense/wrappers/csharp at master · IntelRealSense/librealsense · GitHub
At this point there is a problem though. It says that the solution (sln) file only currently works with Windows. So I am not certain how you would gain that file with a Mac. The best I can think of is to attach the two DLL files to this message, which I have done, so you can try importing them into the Plugins and Plugins.Managed folders in Unity.
Regarding not having Plugins / Plugins.Managed folders in your Unity project: I started a new project to check this and confirmed that a new project did not have these folders. So simply create these folders in your Assets windows and place the DLLs in the appropriate folder according to the instructions.
- realsense2.dll.zip 2.3 MB
- LibrealsenseWrapper.dll.zip 24.1 K
6. Re: D400 Unity plugin broken on OSX?RGBDude Feb 9, 2018 10:55 AM (in response to MartyG)
That is fantastic, I appreciate all of your help. Do you work for Intel?
It still perplexes me that these files are not already included in the Unity project. Why don't Intel just include a .unitypackage file like they did with the old realsense SDK, and like every other SDK developer does?
It seems needlessly complicated and will likely turn away many developers.
Thank you again.
7. Re: D400 Unity plugin broken on OSX?RGBDude Feb 9, 2018 11:02 AM (in response to MartyG)
Unfortunately it still does not work on OSX.
DllNotFoundException: realsense2
Intel.RealSense.Colorizer..ctor ()
ColorizeDepth..ctor () (at Assets/RealSenseSDK2.0/Scripts/ColorizeDepth.cs:9)
LibrealsenseWrapper.dll has support for OSX x64, but realsense2.dll is Windows only.
8. Re: D400 Unity plugin broken on OSX?MartyG Feb 9, 2018 11:09 AM (in response to RGBDude)
I'm glad I could be of help. I am not an Intel employee, I am a volunteer support guy known as a 'Super User'.
Yes, a pre-made package file in the style of the old Unity Toolkit executable would be very useful. As it would have to be updated with a new Realsense2.dll every time a new SDK build was released though in order to reflect the latest changes in the SDK, this may be more trouble than when the old Windows SDKs were updated on a roughly 6 month schedule. The simplest approach may be to put download links for the latest versions of these two DLLs on a web page, since Intel will have to build the files anyway for the binary version of the SDK.
I just saw your latest message. It is a positive at least that LibrealsenseWrapper.dll is the one that works with Unity, as that is the one that is potentially the most complicated to build for Mac. You should be able to obtain realsense2.dll in theory just by building the SDK 2.0 on yur Mac. Whether that Mac-built DLL will work in Unity, I cannot say. | https://communities.intel.com/thread/122475 | CC-MAIN-2018-09 | refinedweb | 998 | 75.4 |
It's 9menu so right-click behaves like Openbox root menu. The C code works I just don't know how to include this in dwm.
```c
#include <stdio.h>
#include <stdlib.h>

#define MENUSCRIPT "\
#!/bin/sh \n\
9menu -teleport -warp -popup -bg \"#101010\" -fg \"#c0c0c0\" -font \"-*-dejavu sans mono-medium-r-*-*-17-*-*-*-*-*-*-*\" \
\"Run\":\"dmenu_run -b -fn 'DejaVu Sans Mono-15' -nb '#101010' -nf '#c0c0c0' -sb '#c0c0c0' -sf '#101010' &\" \
\"Ranger\":\"xterm -e ranger &\" \
\"Firefox\":\"firefox &\" \
\"Geany\":\"geany &\" \
\"Terminal\":\"xterm &\" \
----------: \
\"Exit\":exit \n\
"

int main()
{
	system(MENUSCRIPT);
	return 0;
}
```
Offline
Calling system() to execute a shell which in turn just runs a different binary is ridiculous. Just run 9menu with its parameters through dwm's built-in exec:
static const char *menucmd[] = { "9menu", "-teleport", "-warp", ... , NULL };
Fill in the dots with all your other parameters for 9menu.
"UNIX is simple and coherent..." - Dennis Ritchie, "GNU's Not UNIX" - Richard Stallman
Offline
Pretty sure I've seen this done, but is there a patch that will remove the window title from the statusbar, or will that require some editing of dwm.c?
Offline
Pretty sure I've seen this done, but is there a patch that will remove the window title from the statusbar, or will that require some editing of dwm.c?
Although it's old and may no longer apply cleanly, this might be what you want. It will at least point you to the relevant lines of code.
If you can't sit by a cozy fire with your code in hand enjoying its simplicity and clarity, it needs more work. --Carlos Torres
Offline
Thanks; that's probably where I originally saw the patch.

Edit - and you're right, it no longer applies cleanly.
Last edited by PackRat (2017-03-06 01:57:10)
Offline
Hi, I've tried a few WMs (i3wm, dwm, awesomeWM), and I think I'm mostly satisfied with dwm. I use the pertag, tilegap and movestack patches from the official site. But there's one thing that bothers me most: I'm unable to change the height of the windows. I currently don't have much time to look into it, but I've searched before and haven't managed to find a patch like that. Has anyone tried that, or seen a patch for this? Thanks for any replies.
Offline
matej, can you clarify what you want? How do you want to change the height of the windows: manually, or change a default height? Tilers generally have windows fill the available space within their tiled region, so I'm not really clear what changing the height would mean unless you want to leave a gap at the top or bottom of the screen. Alternatively, do you mean manually change window height either with mouse resizing or a keybinding?
EDIT: oops, in hindsight there is a meaning that seems more likely, but if you could confirm this is what you want it'd still be helpful: do you mean within the stack (e.g. windows on the right half of the screen in rstack mode) change the portion of the vertical height given to each window?
"UNIX is simple and coherent..." - Dennis Ritchie, "GNU's Not UNIX" - Richard Stallman
Offline
that edit
i don't want gaps, i just want to be able to change the height of a window in the stack
EDIT:
maybe this? i need to look into it, it's still against version 6.1 so it's possible i'll need to modify it a bit
EDIT #2:
there's also a mistake in the patch... the height of stack tiles must be multiplied instead of divided as it is in the original source (it has been changed for the master tiles, but not for the stack)
Last edited by matej.focko (2017-03-14 22:05:45)
Offline
Perhaps I'm not using the right search terms, but would anyone point me in the right direction to finding a patch, if it exists? I currently use the warp patch, which focuses the window the mouse cursor is hovering over. This goes against what I need for one type of workflow.

I download a pdf in the web browser (master) and open the pdf. It opens zathura on the client side. I press ctrl+p to open the floating print menu, then press enter to let it print. The floating window is over the master side, so when the floating print popup goes away, focus lands on master, which is the web browser. Is there any patch that can focus the last active window when floating windows are closed?
I'd hate to give up the warp patch, as it's very handy in other workflows. Has anyone else come up against this situation? Any guidance is always appreciated.
Edit: Another workaround would be if anyone can suggest a way to have floating windows show up on the top right of the screen (first client); that would work too.
Last edited by frank604 (2017-03-22 23:36:51)
Can anyone help? I have the following code:
static const char *dmenucmd[] = { "j4-dmenu-desktop", "--dmenu", "dmenu -i -fn '-*-terminus-medium-r-*-*-12-*-*-*-*-*-*-*' -nb '#222222' -sb '#005577'", "--no-generic", "--term=urxvt", NULL };
the problem is I want -fn, -nb, -sb to take their values from font, normbgcolor, and selbgcolor, but if I change it to the following:
static const char *dmenucmd[] = { "j4-dmenu-desktop", "--dmenu", "dmenu -i -fn", font, "-nb", normbgcolor, "-sb", selbgcolor, "--no-generic", "--term=urxvt", NULL };
it won't work because "dmenu -i -fn '-*-terminus-medium-r-*-*-12-*-*-*-*-*-*-*' -nb '#222222' -sb '#005577'" must be in one string.
I don't even know C, so I hope you understand what I'm saying.
Thanks in advance.
Your best bet would be to use preprocessor macros:
#define FONT "-*-terminus-medium-r-*-*-12-*-*-*-*-*-*-*"
#define NB "#222222"
#define SB "#005577"
...
static const char *font = FONT ...
static const char *dmenucmd[] = { "j4-dmenu-desktop", "--dmenu", "dmenu -i -fn '" FONT "' -nb '" NB "' -sb ...
"UNIX is simple and coherent..." - Dennis Ritchie, "GNU's Not UNIX" - Richard Stallman
Offline
Thanks a lot.
Another question: is there a way to make an application that can't remember its own window placement (such as a terminal, gvim, etc.) be placed in the center of the screen when it's started? By default it's started in the corner near the tags. It doesn't affect applications like Firefox, but it's quite irritating and requires me to move the window to the center of the screen.
I'm in floating mode, BTW.
I suspect the following diff to dwm.c (based on dwm-git) should do that:
1048,1051c1048,1049
<);
---
> c->x = c->mon->mx + (c->mon->mw - c->w) / 2;
> c->y = c->mon->my + (c->mon->mh - c->h) / 2;
"UNIX is simple and coherent..." - Dennis Ritchie, "GNU's Not UNIX" - Richard Stallman
Offline
@Trilby: it appears that everything is forced to be placed in the center (e.g. GIMP).
Without the patch, GIMP can remember its placement, but after patching, GIMP loses its placement.
Before: (screenshot)
After: (screenshot)
Last edited by zanculmarktum (2017-06-14 12:54:23)
True, you can adapt my approach. Just check if c->x and c->y are zero before centering. Then any client that specifies a (non-zero) position would be placed as requested, but any client that doesn't specify coordinates would be centered. Of course this would fail on clients that actually wanted 0,0 coordinates, but there's really no good way around that.
"UNIX is simple and coherent..." - Dennis Ritchie, "GNU's Not UNIX" - Richard Stallman
Offline
Something like this?
if c->x == 0 and c->y {
    c->x = c->mon->mx + (c->mon->mw - c->w) / 2;
    c->y = c->mon->my + (c->mon->mh - c->h) / 2;
}
Please fix, I don't know C, lol.
How can I set this command in config.h so I can later assign it to the Print key?
maim ~/Pictures/Screenshot-$(date -Iseconds | cut -d'+' -f1).png
Thanks!
PS: maim is yet another scrot-like screenshot tool!
Last edited by oldgaro (2017-09-19 19:25:48)
You could save this command line as a script in e.g. /usr/local/bin, name it e.g. "screenshot", then bind the Print key to the executable like this:
[...]
static const char *screenshot[] = { "/usr/local/bin/screenshot", NULL };
[...]
{ 0, XK_Print, spawn, {.v = screenshot } },
[...]
What? Why create a script? Just use that command passed to a shell - or just use the macro defined specifically for this purpose:
{ 0, XK_Print, spawn, SHCMD("maim ~/Pictures/Screenshot-$(date -Iseconds | cut -d'+' -f1).png") },
"UNIX is simple and coherent..." - Dennis Ritchie, "GNU's Not UNIX" - Richard Stallman
Offline
Thanks!
#define SHCMD(cmd) { .v = (const char*[]){ "/bin/sh", "-c", cmd, NULL } }
{ 0, XK_Print, spawn, SHCMD("maim ~/Pictures/Screenshot-$(date -Iseconds | cut -d'+' -f1).png") },
Last edited by oldgaro (2017-09-21 00:59:50)
This is of course the better solution
I just prefer scripts, because I can change them anytime without the need to recompile anything.
Last edited by sekret (2017-09-21 08:05:57)
This is of course the better solution
I just prefer scripts, because I can change them anytime without the need to recompile anything.
Interesting! I do change my scripts sometimes... I will keep your solution commented in config.h for later use! haha
Does anyone know why st does not fill the entire screen?
The same happens with Spacemacs and urxvt, but not with Firefox, zathura, Steam... … nfig.def.h
thanks!
PS: setting resizehints to 0 did not have any effect!
Last edited by oldgaro (2017-09-23 23:02:33)
Please read the wiki before posting here: … al_windows
Also, this is a hacking, not a general support, thread.
Arch + dwm • Mercurial repos • Surfraw
Registered Linux User #482438
This is so great! This is a great board with a lot of helpful people, and I look forward to learning a lot here. I just finished "Teach Yourself C++ in 21 Days", so I'm not a complete newb, but I definitely need to learn a lot more. So everyone prepare for my MANY MANY questions!
Here's a question to get things rolling.
The only thing I couldn't get to work is for this program to use integers it receives from the command line. How can I get this to work? Thanks.
Code:
// Triangle computation program
#include <iostream>

using namespace std;

int main(int argc, int *argv[])
{
    int a, b, c;

    //cout << argch[0];
    cout << "\n";
    cout << argv[1];
    cout << argv[2];
    cout << argv[3];

    if (argc > 4 || (argc < 4 && argc > 1))
    {
        cout << "Invalid number of sides.\n";
        cout << "Implementation: <filename> sideA sideB sideC\n";
    }
    if (argc == 4)
    {
        a = argv[1];
        b = argv[2];
        c = argv[3];
        cout << "Calculating based on given arguments.\n";
    }
    if (argc == 1)
    {
        cout << "Enter side 1: ";
        cin >> a;
        cout << "Enter side 2: ";
        cin >> b;
        cout << "Enter side 3: ";
        cin >> c;
        cout << "Calculating...\n";
    }
    if (a == b && a == c && b == c)
        cout << "This is an equilateral Triangle.\n\b";
    if ((a == b && a != c) ||
        (a == c && a != b) ||
        (b == c && b != a))
        cout << "This is an isosceles Triangle.\n\b";
    if (((a*a) + (b*b) == (c*c)) ||
        ((b*b) + (c*c) == (a*a)) ||
        ((c*c) + (a*a) == (b*b)))
        cout << "This is a right triangle.\n\b";
    else
        cout << "Please check the numbers and try again!\n";
    return 0;
}
-Peace | http://cboard.cprogramming.com/cplusplus-programming/45200-int-arguments-printable-thread.html | CC-MAIN-2015-35 | refinedweb | 296 | 83.05 |
When and how to use the super keyword in Java?
The super keyword refers to the immediate parent class. super() is used to call the parent class constructor (and must be the first statement in a subclass constructor), while super.methodName() calls a method of the parent class.