std::strtof, std::strtod, std::strtold
Interprets a floating-point value in a byte string pointed to by str. The function discards any leading whitespace, then takes as many characters as possible to form a valid floating-point representation: a decimal or hexadecimal number, an infinity expression, or a NaN expression of the form NAN or NAN(char_sequence), where char_sequence may contain digits, Latin letters, and underscores; in that case the result is a quiet NaN floating-point value.
The function sets the pointer pointed to by str_end to point to the character past the last character interpreted. If str_end is a null pointer, it is ignored.
Return value
Floating-point value corresponding to the contents of str on success. If the converted value falls outside the range of the corresponding return type, a range error occurs (errno is set to ERANGE) and HUGE_VAL, HUGE_VALF, or HUGE_VALL is returned. If no conversion can be performed, 0 is returned and *str_end is set to str.
Example
#include <iostream>
#include <string>
#include <cerrno>
#include <cstdlib>

int main()
{
    const char* p = "111.11 -2.22 0X1.BC70A3D70A3D7P+6 1.18973e+4932zzz";
    char* end;
    std::cout << "Parsing \"" << p << "\":\n";
    for (double f = std::strtod(p, &end); p != end; f = std::strtod(p, &end))
    {
        std::cout << "'" << std::string(p, end - p) << "' -> ";
        p = end;
        if (errno == ERANGE)
        {
            std::cout << "range error, got ";
            errno = 0;
        }
        std::cout << f << '\n';
    }
}
Output:
Parsing "111.11 -2.22 0X1.BC70A3D70A3D7P+6 1.18973e+4932zzz": '111.11' -> 111.11 ' -2.22' -> -2.22 ' 0X1.BC70A3D70A3D7P+6' -> 111.11 ' 1.18973e+4932' -> range error, got inf | https://en.cppreference.com/w/cpp/string/byte/strtof | CC-MAIN-2018-34 | refinedweb | 212 | 63.7 |
Red Hat Bugzilla – Bug 58308
Rawhide glibc is hosed
Last modified: 2016-11-24 10:00:31 EST
I had these installed:
glibc-common-2.2.4-20
glibc-2.2.4-20
glibc-profile-2.2.4-20
glibc-devel-2.2.4-20
I attempted to upgrade to these:
glibc-2.2.90-3.i386.rpm glibc-debug-static-2.2.90-3.i386.rpm
glibc-common-2.2.90-3.i386.rpm glibc-devel-2.2.90-3.i386.rpm
glibc-debug-2.2.90-3.i686.rpm glibc-profile-2.2.90-3.i386.rpm
When I rebooted after doing so, loadkeys segfaulted while loading my key table,
depmod segfaulted while trying to update kernel dependencies, and swapon hung.
Reverting to the previous glibc packages made the problems go away.
Yuck.
Works just fine for me. Why are you updating to i386 glibc BTW? Were you using
i686 one previously?
I was upgrading to the i386 package because the script that I use to tell me
when new Rawhide packages are available wasn't properly handling the case of a
package with support for multiple architectures. In any case, I tried the i686
package, and it has the same problem. Additional details....
* Here's what gdb says when I run it on loadkeys and the core file after the
coredump:
GNU gdb Red Hat Linux (5.1)
Core was generated by `loadkeys us'.
#0  0x400de422 in versionsort () from /lib/libc.so.6
(gdb) where
#0 0x400de422 in versionsort () from /lib/libc.so.6
#1 0x400dde0c in readdir () from /lib/libc.so.6
#2 0x0804e997 in strcpy ()
#3 0x0804e6ef in strcpy ()
#4 0x0804e6ef in strcpy ()
#5 0x0804e6ef in strcpy ()
#6 0x0804ed42 in strcpy ()
#7 0x0804d62b in strcpy ()
#8 0x0804a330 in strcpy ()
#9 0x08049f73 in strcpy ()
#10 0x40041158 in __libc_start_main () from /lib/libc.so.6
(gdb)
I suspect that the stack above is not completely "honest", but the "versionsort" part of it appears to be consistent -- I see that in the stack traces of all the processes that are coredumping.
* I tried to use glibc-debug to debug this further, but I was unable to get the
system to recognize that there are valid shared libraries in /usr/lib/debug.
Setting LD_LIBRARY_PATH to /usr/lib/debug had no effect whatsoever. Replacing
/lib/libc-2.2.90.so with /usr/lib/debug/libc-2.2.90.so didn't work either -- I
got "sh: error while loading shared libraries: libc.so.6: cannot open shared
object file: No such file or directory" when trying to run anything afterwards.
Are you sure the libraries in the glibc-debug package are OK?
This is very weird at least.
/usr/lib/debug works just fine for me:
$ gdb /tmp/test
GNU gdb (2001-11-07)
(gdb) set env LD_LIBRARY_PATH=/usr/lib/debug/
(gdb) b main
Breakpoint 1 at 0x8048409: file /tmp/test.c, line 17.
(gdb) r
Starting program: /tmp/test
Breakpoint 1, main () at /tmp/test.c:17
17 exit (0);
(gdb) l strcpy
26 ../sysdeps/generic/strcpy.c: No such file or directory.
in ../sysdeps/generic/strcpy.c
(gdb) set env LD_LIBRARY_PATH=
Setting environment variable "LD_LIBRARY_PATH" to null value.
(gdb) r
The program being debugged has been started already.
Start it from the beginning? (y or n) y
Starting program: /tmp/test
Breakpoint 1, main () at /tmp/test.c:17
17 exit (0);
(gdb) l strcpy
No line number known for strcpy.
(gdb) quit
The program is running. Exit anyway? (y or n) y
(as you can see, if I use /usr/lib/debug libraries, they have debugging info,
if not, there is no debugging info for glibc).
The above details are with glibc-2.2.90-3.i386.rpm, right?
Otherwise the library in question would be /lib/i686/libc.so.6, not /lib/libc.so.6.
versionsort is never called from readdir, so the backtrace is obviously bogus.
I wonder if perhaps you have a newer version of ld.so and there's an
incompatibility. I have ld.so-1.9.5-13.
The above details were with the i686 package, not the i386 package. Note that it installs files both in /lib and /lib/i686. Note, furthermore, that when I ran "ldd" on a random binary after installing the i686 package, it listed the libc in /lib, not the one in /lib/i686.
The dynamic linker you use is in glibc-2.2.90* rpm, ld.so-1.9.5-13 is just
for libc 5 programs.
By any chance, are you running a 2.2 kernel? glibc-debug is not compatible with
that (and /lib/i686/libc.so.6 is neither).
Yes, I'm using a 2.2 kernel.
I really, really hope that you're not planning on releasing a glibc version that
isn't compatible with 2.2 kernels. Surely there is a way to make it backward
compatible. There are people, including me, who *can't* use 2.4.x kernels
because they f**k up our IDE drives. I'd love to see this fixed, and I've been
trying to help the people who know something about kernel IDE support to fix it,
but in the meantime, please don't make it impossible for people like me to keep
our systems up-to-date by releasing a glibc that won't work with 2.2!
Well, nothing changed in this regard since 7.1 days:
in glibc-2.2.4-NN if you install i686.rpm, you get both /lib/libc.so.6 and
/lib/i686/libc.so.6. The former works with 2.2 kernels and was stripped,
the latter works with 2.4 kernels only and was not stripped.
In glibc-2.2.90 currently just /lib/i686/libc.so.6 is stripped too (but
using strip -g, not strip, so .symtab/.strtab are kept) and the unstripped
library is in /usr/lib/debug.
Now, to back to the original issue:
can you do strace and ltrace of loadkeys to see what is it really doing?
Created attachment 42455 [details]
strace from "loadkeys us" with glibc 2.2.90 and 2.2.x kernel
Created attachment 42456 [details]
ltrace from "loadkeys us" with glibc 2.2.90 and 2.2.x kernel
It looks like you were right -- I don't have this problem when I run the new
glibc with kernel 2.4.x, so it is a kernel incompatibility issue. But note that
I *also* don't have the problem with 2.2.x with glibc-2.2.4-20.i686.rpm
installed, so I'm not convinced that you're right that there have been
incompatibilities since 7.1. I think this is new and needs to be fixed.
As I said, I'm not aware of any change in readdir/getdents which might cause
this. Well, it might be that this function is miscompiled with gcc-3.1.
Can you please try:
#include <dirent.h>
#include <stdlib.h>

int main (void)
{
  DIR *dir = opendir("/lib/kbd/keymaps");
  if (dir == NULL)
    exit (1);
  readdir (dir);
  exit (0);
}
if it crashes too?
If yes, does the same thing statically linked crash as well?
Ok, I checked __getdents disassembly carefully and it is a reload bug, which
Richard Henderson fixed 5 days ago:
I'll put this into gcc soonish and once glibc will be rebuilt with that, it
should work fine again.
This has been fixed in gcc-3.1-0.18 and likely glibc-2.2.90-4.

Source: https://bugzilla.redhat.com/show_bug.cgi?id=58308
README
NOVEMBERIZING DOSBOX PORTED TO EMSCRIPTEN
"dosbox" is distributed under the GNU General Public License. See the COPYING file for more information.
DOSBOX, EMSCRIPTEN
BUILD
$ npm run build
INSTALL
$ npm install --save @novemberizing/dosbox
SAMPLE
import dosbox from '@novemberizing/dosbox';

dosbox.run(null)
    .then(o => console.log(o))
    .catch(e => console.log(e));
$ cd dosbox-0.74-3
$ LIBS="-lidbfs.js" CXXFLAGS="-g -O3" LDFLAGS="-s TOTAL_MEMORY=67108864 -s ERROR_ON_UNDEFINED_SYMBOLS=0 -s ASYNCIFY -s ASSERTIONS=1 -s DISABLE_EXCEPTION_CATCHING=0 -s EXPORT_ES6=1 -s MODULARIZE=1 -s EXPORTED_RUNTIME_METHODS=['FS']" emconfigure ./configure --bindir=${PWD}/../src --datarootdir=${PWD}/../build --host=`./config.guess` --with-zip=${PWD}/../../zip
$ emmake make clean
$ emmake make
$ emmake make install
"prebuild": "cd dosbox-0.74-3 &&
emconfigure ./configure --bindir=${PWD}/../src --datarootdir=${PWD}/../build --host=
./config.guess
&&
emmake make clean && emmake make && emmake make install",
DOSBox ported to Emscripten

About
DOSBox is an open source DOS emulator designed for running old games. Emscripten compiles C/C++ code to JavaScript. This is a version of DOSBox which can be compiled with Emscripten to run in a web browser. It allows running old DOS games and other DOS programs in a web browser.
DOSBox is distributed under the GNU General Public License. See the COPYING file for more information.
Status
Em-DOSBox runs most games successfully in web browsers. Although DOSBox has not been fully re-structured for running as an Emscripten main loop, most functionality is available thanks to emterpreter sync. A few programs can still run into problems due to paging exceptions.
Other issues
- Game save files are written into the Emscripten file system, which is by default an in-memory file system. Saved games will be lost when you close the web page.
- Compiling in Windows is not supported. The build process requires a Unix-like environment due to use of GNU Autotools. See Emscripten issue 2208.
- Emscripten issue 1909 used to make large switch statements highly inefficient. It seems fixed now, but V8 JavaScript Engine issue 2275 prevents large switch statements from being optimized. Because of this, the simple, normal and prefetch cores are automatically transformed. Case statements for x86 instructions become functions, and an array of function pointers is used instead of the switch statements. The --enable-funarray configure option controls this and defaults to yes.
- The same origin policy prevents access to data files when running via a file:// URL in some browsers. Use a web server such as python -m SimpleHTTPServer instead.
- In Firefox, ensure that dom.max_script_run_time is set to a reasonable value that will allow you to regain control in case of a hang.
- Firefox may use huge amounts of memory when starting asm.js builds which have not been minified.
- The FPU code uses doubles and does not provide full 80 bit precision. DOSBox can only give full precision when running on an x86 CPU.

Compiling
Use the latest stable version of Emscripten (from the master branch). For more information see the Emscripten installation instructions. Em-DOSBox depends on bug fixes and new features found in recent versions of Emscripten. Some Linux distributions have packages with old versions, which should not be used.
First, create ./configure by running ./autogen.sh. Then configure with emconfigure ./configure and build with make. This will create src/dosbox.js which contains DOSBox and src/dosbox.html, a web page for use as a template by the packager. These cannot be used as-is. You need to provide DOSBox with files to run and command line arguments for running them.
This branch supports SDL 2 and uses it by default. Emscripten will automatically fetch SDL 2 from Emscripten Ports and build it. Use of make -j to speed up compilation by running multiple Emscripten processes in parallel may break this. Once SDL 2 has been built by Emscripten, you can use make -j. To use a different pre-built copy of Emscripten SDL 2, specify a path as in emconfigure ./configure --with-sdl2=/path/to/SDL-emscripten. To use SDL 1, give a --with-sdl2=no or --without-sdl2 argument to ./configure.
Emscripten emterpreter sync is used by default. This enables more DOSBox features to work, but requires an Emscripten version after 1.29.4 and may cause a small performance penalty. To disable use of emterpreter sync, add the --disable-sync argument to ./configure. When sync is used, the Emscripten memory initialization file is enabled, which means dosbox.html.mem needs to be in the same folder as dosbox.js. The memory initialization file is large. Serve it in compressed format to save bandwidth.
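Pre-compressing the memory initialization file is straightforward; a sketch (the real dosbox.html.mem is the file named above — a stand-in is created here so the commands can run anywhere):

```shell
# Create a stand-in for the large memory init file.
printf 'stand-in contents' > dosbox.html.mem

# Write a gzipped copy alongside the original; most web servers can be
# configured to serve the .gz variant with Content-Encoding: gzip.
gzip -c dosbox.html.mem > dosbox.html.mem.gz

ls -l dosbox.html.mem dosbox.html.mem.gz
```

Serving the .gz file still requires server-side configuration (how depends on the server), but the compression step itself is just this.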
WebAssembly
WebAssembly is a binary format which is meant to be a more efficient replacement for asm.js. To build to WebAssembly instead of asm.js, the -s WASM=1 option needs to be added to the final emcc link command. There is no need to rebuild the object files. You can do this using the --enable-wasm option when running ./configure. Since WebAssembly is still under very active development, use the latest incoming Emscripten and browser development builds like Firefox Nightly and Chrome Canary.
Building with WebAssembly changes the dosbox.html file. Files packaged using a dosbox.html without WebAssembly will not work with a WebAssembly build.
Running DOS Programs
To run DOS programs, you need to provide a suitable web page, load files into the Emscripten file system, and give command line arguments to DOSBox. The simplest method is by using the included packager tools.
The normal packager tool is src/packager.py, which runs the Emscripten packager. It requires dosbox.html, which is created when building Em-DOSBox. If you do not have Emscripten installed, you need to use src/repackager.py instead. Any packager or repackager HTML output file can be used as a template for the repackager. Name it template.html and put it in the same directory as repackager.py.
The following instructions assume use of the normal packager. If using repackager, replace packager.py with repackager.py. You need Python 2 to run either packager.
If you have a single DOS executable such as Gwbasic.exe, place it in the same src directory as packager.py and package it using:
./packager.py gwbasic Gwbasic.exe
This creates gwbasic.html and gwbasic.data. Placing those in the same directory as dosbox.js and viewing gwbasic.html will run the program in a web browser:
Some browsers have a same origin policy that prevents access to the required data files while using file:// URLs. To get around this you can use Python's built in Really Simple HTTP Server and point the browser to.
python -m SimpleHTTPServer 8000
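Note that SimpleHTTPServer is Python 2 only; on Python 3 the module was renamed to http.server, so the equivalent command is `python3 -m http.server 8000`. The rename can be checked without starting a blocking server:

```shell
# Python 3 renamed the module; the equivalent server one-liner is:
#   python3 -m http.server 8000
# Verify the module exists without actually binding a port:
python3 -c "import http.server; print('ok')"
```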
If you need to package a collection of DOS files, place all the files in a single directory and package that directory with the executable specified. For example, if Major Stryker's files are in the subdirectory src/major_stryker and it's launched using STRYKER.EXE you would package it using:
./packager.py stryker major_stryker STRYKER.EXE
Again, place the created stryker.html and stryker.data files in the same directory as dosbox.js and view stryker.html to run the game in browser.
You can also include a DOSBox configuration file that will be acknowledged by the emulator to modify any speed, audio or graphic settings. Simply include a dosbox.conf text file in the package directory before you run ./packager.py.
To attempt to run Major Stryker in CGA graphics mode, you would create the configuration file /src/major_stryker/dosbox.conf and include this body of text:
[dosbox]
machine=cga

Then package it using:
./packager.py stryker-cga major_stryker STRYKER.EXE
Credits
Most of the credit belongs to the DOSBox crew. They created DOSBox and made it compatible with a wide variety of DOS games. Ismail Khatib got DOSBox to compile with Emscripten, but didn't get it to work. Boris Gjenero started with that and got it to work. Then, Boris re-implemented Ismail's changes a cleaner way, fixed issues and improved performance to make many games usable in web browsers. Meanwhile, Alon Zakai quickly fixed Emscripten bugs which were encountered and added helpful Emscripten features.
npm run prebuild
BUILD
CXXFLAGS="-g -O3" LDFLAGS="-s TOTAL_MEMORY=67108864 -s ERROR_ON_UNDEFINED_SYMBOLS=0 -s ASYNCIFY -s ASSERTIONS=1 -s DISABLE_EXCEPTION_CATCHING=0 -lidbfs.js -s EXTRA_EXPORTED_RUNTIME_METHODS=['FS']" emconfigure ./configure
emmake make
em++ -g -O3 -mno-ms-bitfields -s TOTAL_MEMORY=67108864 -s ERROR_ON_UNDEFINED_SYMBOLS=0 -s ASYNCIFY -s ASSERTIONS=1 -s DISABLE_EXCEPTION_CATCHING=0 -o dosbox.html dosboxX11 -lGL
docker run -it --rm --name dosbox -v ${PWD}:/usr/local/apache2/htdocs -p 80:80 httpd
BUILD
CXXFLAGS="-g -O3" LDFLAGS="-s TOTAL_MEMORY=67108864 -s ERROR_ON_UNDEFINED_SYMBOLS=0 -s ASYNCIFY -s ASSERTIONS=1 -s DISABLE_EXCEPTION_CATCHING=0 -lidbfs.js -s EXPORTED_RUNTIME_METHODS=['FS'] -s ENVIRONMENT=web -s FILESYSTEM=0" emconfigure ./configure --bindir=${PWD}/../docs --datarootdir=${PWD}/../build --host=`./config.guess`
emmake make
emmake make install
BUILD ENVIRONMENT
sudo apt install git sudo apt install build-essential sudo apt install automake autoconf sudo apt install python sudo apt install python3
BUILD ENVIRONMENT FOR WINDOWS (USING WINDOWS SUBSYSTEM - UBUNTU 20.04)
sudo apt install git sudo apt install build-essential sudo apt install automake autoconf sudo apt install python sudo apt install python3
warning: Audio callback had starved sending audio by 0.761900226757291 seconds.

Source: https://www.skypack.dev/view/@novemberizing/dosbox
Tsz Wo (Nicholas), SZE commented on HDFS-3000:
----------------------------------------------
> I tried to do this, but it's not as easy as it seems since DistributedFileSystem is taking
care of a bunch of path handling. ... are you OK leaving it as is, Nicholas?
Sure. If it is too hard, let's leave it for now and make the improvement later.
+1 patch looks good. Some minor comments which could be fixed here or later.
- "quota (count of files and directories)" => "namespace quota (count of name objects including
files, directories and symlinks)"
- "space quota (size of files and directories) for a directory." => "disk space quota (size
of files) for a directory. Note that directories and symlinks do not occupy disk space.
- You may want to put the following in one line.
{code}
public HdfsAdmin(URI uri, Configuration conf)
throws IOException {
{code}
- Remove "throws IOException" from TestHdfsAdmin.shutDownCluster().
> Add a public API for setting quotas
> -----------------------------------
>
> Key: HDFS-3000
> URL:
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs client
> Affects Versions: 2.0.0
> Reporter: Aaron T. Myers
> Assignee: Aaron T. Myers
> Attachments: HDFS-3000.patch, HDFS-3000.patch, HDFS-3000.patch, HDFS-3000.patch
>
>
Source: http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-issues/201204.mbox/%3C1090896502.4504.1333419034780.JavaMail.tomcat@hel.zones.apache.org%3E
- NAME
- VERSION
- SYNOPSIS
- DESCRIPTION
- SEE ALSO
- AUTHOR
NAME
JSON::Schema::Shorthand - Alternative, condensed format for JSON Schemas
VERSION
version 0.0.2
SYNOPSIS
    use JSON::Schema::Shorthand;

    my $schema = js_shorthand({
        object => {
            foo => 'number',
            bar => { type => 'string', required => 1 },
        }
    });

    # $schema is
    # {
    #     type => 'object',
    #     properties => {
    #         foo => { type => 'number' },
    #         bar => { type => 'string' },
    #     },
    #     required => [ 'bar' ],
    # }
DESCRIPTION
JSON Schema is a useful beast, but its schema definition can be a little bit more long-winded than necessary. This module allows to use a few shortcuts that will be expanded into their canonical form.
CAVEAT: the module is still very young, and there are plenty of properties this module should expand and does not. So don't trust it blindly. If you hit such a case, raise a ticket and I'll refine the process.
js_shorthand
my $schema = js_shorthand $shorthand;
The module exports a single function, js_shorthand, that takes in a JSON schema in shorthand notation and returns the expanded, canonical schema form.

If you don't like the name js_shorthand, you can always import it under a different name in your namespace.

    use JSON::Schema::Shorthand 'js_shorthand' => { -as => 'expand_json_schema' };

    ...;

    my $schema = expand_json_schema $shorthand;
Types as string
If a string type is encountered where a property definition is expected, the string is expanded to the object { "type": type }.
{ "foo": "number", "bar": "string" }
expands to
{ "foo": { "type": "number" }, "bar": { "type": "string" } }
If the string begins with a #, the type is assumed to be a reference and #type is expanded to { "$ref": type }.
{ "foo": "#/definitions/bar" }
becomes
{ "foo": { "$ref": "#/definitions/bar" } }
object property
{ object: properties } expands to { type: "object", properties }.
shorthand:

    foo: {
        object: {
            bar: { }
        }
    }

expanded:

    foo: {
        type: "object",
        properties: {
            bar: { }
        }
    }
array property
{ array: items } expands to { type: "array", items }.
shorthand:

    foo: {
        array: 'number'
    }

expanded:

    foo: {
        type: "array",
        items: { type: 'number' }
    }
required property
If the required attribute is set to true for a property, it is bubbled up to the required attribute of its parent object.
shorthand:

    foo: {
        properties: {
            bar: { required: true },
            baz: { }
        }
    }

expanded:

    foo: {
        required: [ 'bar' ],
        properties: {
            bar: {},
            baz: {}
        }
    }
SEE ALSO
* JSON Schema specs -
* JavaScript version of this module -
AUTHOR
Yanick Champoux <yanick@babyl.dyndns.org>
This software is copyright (c) 2016 by Yanick Champoux.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
Source: https://metacpan.org/pod/JSON::Schema::Shorthand
I have a pipeline that has a problematic rule. I can't use wildcards in the output, and so I am unable to map the input to the output. The problem is that Snakemake launches a job for each one of the combinations; however, the tool outputs the three required files. I was able to sort the problem with 'dynamic', but I would prefer to
mapping: {'a': ['a_1', 'a_2', 'a_3'], 'b': ['b_1', 'b_2', 'b_3']}
rule build:
    input:
        ini=rules.create_ini.output,
        bam=lambda wildcards:
            expand('mappings/{names}.bam', names=mapping[wildcards.cond],
                   cond=wildcards.cond)
    output:
        dynamic('tool/{cond}/{names}.tool')
    params:
        output='{cond}'
    shell:
        '''
        module load tool
        tool build --conf {input.ini} --output {params.output} # < here
        '''
The tool produces three files, not one, and Snakemake is launching six jobs instead of two.
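Outside Snakemake, the input lambda above is just building a list of paths from the mapping dict. A simplified stand-in for expand() (the real one handles full wildcard products, not just one key) behaves like:

```python
# Plain-Python sketch of what the input lambda evaluates to.
mapping = {'a': ['a_1', 'a_2', 'a_3'], 'b': ['b_1', 'b_2', 'b_3']}

def expand(pattern, names):
    # Simplified stand-in for snakemake's expand(): fill the {names}
    # placeholder once per value in the list.
    return [pattern.format(names=n) for n in names]

print(expand('mappings/{names}.bam', mapping['a']))
# ['mappings/a_1.bam', 'mappings/a_2.bam', 'mappings/a_3.bam']
```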
Here I am talking about my own example; you can implement the same for your code.
To use the dynamic function you have to add another rule whose input is the dynamic files, like this:
rule target:
    input: dynamic('{n}.txt')

rule source:
    output: dynamic('{n}.txt')
    run:
        source_dir = config["windows"]
        source = os.listdir(source_dir)
        for w in source:
            shell("ln -s %s/%s source/%s" % (source_dir, w, w))
This way, Snakemake will know what it has to attribute to the wildcard.
Hint: when you use a wildcard you always have to define it. Here, calling dynamic in the input of the target rule will define the wildcard '{n}'.
Source: https://www.edureka.co/community/3506/how-can-i-map-input-to-output-without-using-dynamic
C++ Quiz
Question #35
According to the C++11 standard, what is the output of this program?
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v1(1, 2);
    std::vector<int> v2{ 1, 2 };
    std::cout << v1.size() << v2.size();
}
Hint:
Which constructors are invoked for v1 and v2? How is a vector constructed from an initializer list?
Source: http://cppquiz.org/quiz/question/35?show_hint=1
I have been following the Ember quick start guide to create an app that displays some data (), but instead of displaying just a JavaScript array with scientists' names, I want to display the products from the following JSON.
I have placed the json file in the public folder.
It looks like:
{ "products": [ { "_id": "58ff60ffcd082f040072305a", "slug": "apple-tree-printed-linen-butterfly-bow-tie", "name": "Apple Tree Printed Linen Butterfly Bow Tie ", "description": "This fun 40 Colori Apple Tree Printed Linen Butterfly Bow Tie features a design of various apple trees built from tiny polka dots. The back of this bow tie features a polka dot print in complementing colours which when the bow tie is tied will pop out from behind making for a subtle yet unique detail. The playful design, stand out natural linen texture, and colourful combinations make this bow tie a perfect accessory for any outfit!\n", "standard_manufacturer": "58cbafc55491430300c422ff", "details": "Size: Untied (self-tie) bow tie with an easily adjustable neck strap from 13-3/4'' to 19'' (35 to 48 cm)\nHeight: 6 cm\nMaterial: Printed 100% linen\nCare: Dry clean\nMade in Italy", "sizes": [ { "value": "Violet", "size": "57722c80c8595b0300a11e61", "_id": "58ff60ffcd082f0400723070", "marked_as_oos_at": null, "quantity": -1, "stock": true, "id": "58ff60ffcd082f0400723070" },
and so on.
My code for the model of the route for displaying the list is as follows
import Ember from 'ember';

export default Ember.Route.extend({
  model() {
    //return ['Marie Curie', 'Mae Jemison', 'Albert Hofmann'];
    return Ember.$.getJSON("/products.json");
  }
});
I have followed the tutorial exactly except for the
return Ember.$.getJSON("/products.json");
line in scientists.js. My data is not being displayed and the error i get in the ember inspector is
compiled.js:2 Uncaught TypeError: Failed to execute 'getComputedStyle' on 'Window': parameter 1 is not of type 'Element'. at E (compiled.js:2) at Object.u (compiled.js:25) at compiled.js:25 E @ compiled.js:2 u @ compiled.js:25 (anonymous) @ compiled.js:25
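One thing worth checking (an assumption, since the template isn't shown): Ember.$.getJSON resolves with the whole payload object, while a template that loops over the model needs the products array itself. The unwrapping step, sketched in plain JavaScript so it can run outside Ember:

```javascript
// The payload wraps the array: { products: [...] }. A route's model hook can
// resolve to the array by chaining .then(extractProducts) onto getJSON.
function extractProducts(payload) {
  return payload && payload.products ? payload.products : [];
}

const sample = { products: [{ name: "Apple Tree Printed Linen Butterfly Bow Tie" }] };
console.log(extractProducts(sample).length); // 1
```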
I am very new with Ember and fairly new with JS. Any help appreciated! Thanks!

Source: https://discuss.emberjs.com/t/ember-app-cannot-load-json-data-from-local-file/13254
Tim Ellison wrote:
>>Matt> Wasn't one of the issues here the theoretical "What
>>Matt> happens when Sun defines a public VMClass class in
>>Matt> java.lang?"
>>
>>There's no bad (i.e., security violating) situation that can arise
>>from this. It is no different from Sun adding any other class that is
>>not yet implemented in Classpath.
>
> There would be a namespace collision though (I'd guess the risk is small
> tho')
Not an issue: code that relies on Sun's new class wouldn't compile
or run properly with the yet-to-be updated Classpath in either case!
-Archie
__________________________________________________________________________
Archie Cobbs * CTO, Awarix *

Source: http://mail-archives.apache.org/mod_mbox/harmony-dev/200511.mbox/%3C436BF33D.9000706@dellroad.org%3E
deluxe single station Home Gym with iron cover and belt Fitness equipment HRGYM16 with cover
100 Pieces (Min. Order)
Home Gym Equipment Iron X-Mount Fitness Anchor Strap For Suspension Exercise
US $3.88-4.68 / Piece
1 Piece (Min. Order)
Gym Fitness Iron Vertical Triangle Dumbbell Rack
US $39.9-58.9 / Set
20 Sets (Min. Order)
steel weight stack /weight blocks foe sale/cast iron weight lifting plates for strength gym equipment
US $1050-1250 / Ton
1 Ton (Min. Order)
Cast Iron Fractional Top Grade Gym Weight Plate
US $1.0-3.5 / Kilograms
2000 Kilograms (Min. Order)
Three sizes Iron Door Gym Pull Up Bars
US $2-8 / Piece
200 Pieces (Min. Order)
import rubber bands iron door gym with resistance bands
US $0.35-1.4 / Piece
1000 Pieces (Min. Order)
sunyounger ab wheel, bar/pull up bar/iron workout gym, good leg exercise
US $2.0-2.1 / Pieces
1000 Pieces (Min. Order)
Top selling 20KG iron bar weight vest Gym factory
US $40-45 / Piece
100 Pieces (Min. Order)
Multifunction fitness equipment pull up bar / Door gym pull up bar become iron man
US $5-6 / Piece
500 Pieces (Min. Order)
Hot Sale Iron Door Gym Pull Up Bar for Body Building
US $3.5-4.2 / Set
100 Sets (Min. Order)
Iron door gym pull up bar multi function bar wholesale
US $3.89-10 / Piece
200 Pieces (Min. Order)
multifunction fitness equipment pull up bar / Door gym pull up bar become iron man
US $7-9 / Carton
2000 Cartons (Min. Order)
iron door gym
US $4-5 / Piece
50 Pieces (Min. Order)
New style hot sale iron door gym with resistance bands
US $3-10 / Piece
500 Pieces (Min. Order)
Hot selling Gym Bar Iron Door Gym Chin-up/Pull up Bar Body Fitness Equipment
US $3.8-4.5 / Piece
100 Pieces (Min. Order)
High quality cheap door iron material gym pull up bar for home gym exercise
US $1-4 / Piece
300 Pieces (Min. Order)
Training home used body-building iron material door gym
US $3.5-4 / Piece
300 Pieces (Min. Order)
alibaba express china good quality cast iron 10kg gym shipping from china dumbbell
US $0.69-60 / Piece
1 Piece (Min. Order)
Gym Weight Lifting Accessories Cast Iron Weight Stacks Plates
US $1-1.3 / Kilogram
1000 Kilograms (Min. Order)
Fitness equipment for gym used Cast Iron Dumbbell
US $1.5-3 / Kilogram
1000 Kilograms (Min. Order)
Iron door gym chin-up bar factory price
US $5-10 / Piece
500 Pieces (Min. Order)
gym horizontal bar ,high quality iron workout gym ,perfect gym pull up bar
US $5.5-7.5 / Pieces
50 Pieces (Min. Order)
3-10Pairs Steel Multideck Dumbbell Rack For Commercial Gym USE
US $30-50 / Set
10 Sets (Min. Order)
Door Gym Weider X Factor For Body Training
US $18.0-18.0 / Pieces
100 Pieces (Min. Order)
HJ-A196 Commercial Fitness Equipment /Gym/A-Frame Barbell Rack /gym tools
US $55.8-67 / Set
50 Sets (Min. Order)
Indoor Body Building Machine Portable Weider X Factor Door Gym
US $18-21 / Piece
200 Pieces (Min. Order)
Cheap price professional gym used adjustable weight bench press for sale
US $20-110 / Piece
1 Piece (Min. Order)
China Adjustable muscle training and running exercise Neoprene ankle wrist Weight in gym
US $2-3.5 / Piece
200 Pieces (Min. Order)
LJ-5906-8 hot Sale commercial gym used 8-Multi stations gym equipment
US $690-3800 / Piece
1 Piece (Min. Order)
Workout Exercise Roller AB Wheels For Gym And Home Exercise
US $1.5-3 / Set
100 Sets (Min. Order)
Sling Exercise Tower 200 Resistance Training Door Gym Bands
US $19.5-22 / Piece
200 Pieces (Min. Order)
Top Quality Ab Wheel With Soft Foam Handles Gym/Home Ab Roller
US $2.2-3.0 / Pieces
100 Pieces (Min. Order)
Vinyl Dumbbell Set Ladies Aerobic Training Weights Strength Training Home Gym (1.5x2kg)
US $1.5-2.0 / Kilograms
2000 Kilograms (Min. Order)
dumbbell aerobics dumbell for home and gym lose weight fitness equipment
US $0.4-2.4 / Set
1000 Sets (Min. Order)
custom weight lifting training wrist support wraps gym lifting straps
US $1.2-4.8 / Piece
500 Pieces (Min. Order)
Gym Spinning Bikes,Home Exercise Bikes,Spin Bikes
US $262-280 / Unit
10 Units (Min. Order)
Multi-function Outdoor Fitness Equipment Gym Exercise Parallel Bars
US $14-20 / Piece
1 Piece (Min. Order)
- About product and suppliers:
Alibaba.com offers 16,060 iron gym products. About 27% of these are gym equipment, 8% are other fitness & bodybuilding products, and 1% are fitness & yoga wear. A wide variety of iron gym options are available to you, such as free samples, paid samples. There are 16,146 iron gym suppliers, mainly located in Asia. The top supplying countries are China (Mainland), Pakistan, and Vietnam, which supply 98%, 1%, and 1% of iron gym respectively. Iron gym products are most popular in North America, Western Europe, and Domestic Market. You can ensure product safety by selecting from certified suppliers, including 4,004 with ISO9001, 1,904 with ISO14001, and 1,329 with Other certification.
Buying Request Hub
Haven't found the right supplier yet ? Let matching verified suppliers find you. Get Quotation NowFREE
Do you want to show iron gym or other products of your own company? Display your Products FREE now!
Related Category
Supplier Features
Supplier Types
Recommendation for you
related suppliers
related Guide | http://www.alibaba.com/showroom/iron-gym.html | CC-MAIN-2018-22 | refinedweb | 911 | 77.23 |
One of the greatest difficulties I found when I started working with BizTalk 2004 was the lack of documentation about the SQL Adapter. In this article, I'm going to demonstrate how we can use this adapter in an Orchestration of BizTalk.
To build this example, we're going to use the Northwind database. We're going to simulate a hypothetical situation where we receive an XML message as a file, containing the order number, a customer ID, and the date of the order. In the orchestration, we will use SQL Server to search the additional information about the customer, using the SQL Adapter.
We'll start this article by creating a new BizTalk Server project in Visual Studio. In the Visual Studio .NET menu, select the "New Project" option, and for the type of project, select "BizTalk Projects". Select the template "Empty BizTalk Project" and create a project named OrderManager.
Now that we have the project, we're going to create two maps that we'll use in the project, one for the input message and one for the output message.
Right-click on the project in the Solution Explorer, and select the "Add New Item" option, then select the "Schema" item. Create a schema named "IncompleteOrder".
Click on the root element of the schema and change the property "NodeName" to "Order". After that, right-click on this node and select the "Insert Schema Node" option. Inside this option, select "Child Field Element". Change the NodeName property of this new node to OrderId. Repeat this operation to create two additional elements named "OrderDate" and "CustomerId". Your schema should look like this:
NodeName
Order
OrderId
OrderDate
CustomerId
Now, we're going to create the schema with the complete information of the order. Right-click on the project in the Solution Explorer and select the "Add New Item" option. Then, select the item "Schema" and name it "CompleteOrder".
When the schema shows up, rename the "Root" element to "CompleteOrder". After that, create the child elements: "OrderId", "OrderDate", "CustomerID", "CustomerName", "CustomerAddress", "CustomerCity", "CustomerCountry". The complete schema can be viewed in the image below:
Root
CompleteOrder
CustomerID
CustomerName
CustomerAddress
CustomerCity
CustomerCountry
In order to test our solution, we need to create some test messages. BizTalk Server is capable of creating these messages for us. Right-click on the "IncompleteOrder.xsd" schema in the Solution Explorer and select the "Properties" option. In the Properties, select the property "Output Instance Filename". In this field, type the value "C:\IncompleteOrder.XML". Click OK to close the window. Right-click on the schema file again in Solution Explorer and select the "Generate Instance" option. Open Windows Explorer and check if the file was created in the indicated place.
If you check the file created in Visual Studio, you'll see that the values generated are random. We're going to modify these values to use some valid information. Replace the OrderId field to the number 1. In the OrderDate field, type the value "2005-03-01", and in the field "CustomerId", the value "ALFKI". The XML file should have a similar structure as the file below:
<ns0:Order xmlns:
<OrderId>1</OrderId>
<OrderDate>2005-03-01</OrderDate>
<CustomerId>ALFKI</CustomerId>
</ns0:Order>
Save the file, we'll need it later.
In order to use the SQL Server resources, we'll need to create a Stored Procedure capable of returning the data from the customer that we will place in the order.
Open the Enterprise Manager of SQL Server and select the Northwind database. Select the "Stored Procedures" applet, and right-click it, select the option "New Procedure". A new procedure should be created as follows:
CREATE PROCEDURE proc_GetCustomer
(@CustomerId char(5))
AS
SELECT * FROM Customers
WHERE CustomerId=@CustomerId FOR XML AUTO, XMLDATA
GO
Don't forget to include the XMLDATA parameter in the end of the procedure, this will generate the necessary information for the SQL adapter. This parameter will be removed later.
XMLDATA
Now that we've created all the necessary structures for the solution, we're going to create the structure to call SQL Server. For this, we will create a new item generated from the Solution Explorer. Right-click on the project in the Solution Explorer and select the "Add Generated Items" option. In the list of items, select the Add Adapter option and click on the Open button. The following screen will show up:
In this screen, select the adapter of type "SQL" and click Next (the other options can stay with default values, unless the database of your BizTalk server is in a remote server).
In the first screen, click on the Set button and provifde the information to connect to your SQL Server instance. Select "Northwind" as the initial catalog. When the connection string is set, click Next.
Th next screen shows information about the schemas that will be used to transport the information to and from SQL. In the Target Namespace option, type "". In the Port-Type, select "Send Port", since we're going to send a request to SQL Server and receive a response. In the property "Request root element name", type "InCustomer", and in the property "Response root element name", type "OutCustomer". The screen should be as in the picture below:
Click on the Next button. In the screen "Statement type information", select the option "Stored Procedure". Click Next. In the combo box for selecting the procedure, select proc_GetCustomer. The stored procedure parameters will show up. Click on the first parameter (near the check box... do not check the check box, just click near it until the prompt appears) to enter the parameter information. Type "ANTON", that is a valid customer ID. Click on the Generate button and you will see that the script used to execute the procedure will show up in the bottom of the screen. The result can be observed as in the image below:
proc_GetCustomer
This information will be used by the SQL adapter to generate the initial schema, they are not used later in the project. Click on the Next button, and in the next screen, click the Finish button. A new schema and a new orchestration will be created in your project.
The created schema (SQLService) contains the request and response information for the stored procedure. The orchestration contains some types (port type) used to call the SQL adapter.
SQLService
Right-click on the orchestration in Solution Explorer and select the "Rename" option. Rename it to "ProcessOrder.odx".
Open the orchestration file and click on the white area in the orchestration designer (between the red and green indicators that indicate the end and beginning of the process). Check the property windows, and change the TypeName property from Orchestration_1 to ProcessOrderOrch.
TypeName
Orchestration_1
ProcessOrderOrch
In order to use our messages within the orchestration, it's necessary to create message variables. To do this, we'll need the Orchestration View window. Click on the "View" menu, select "Other Windows", and select the "Orchestration View" window.
In the orchestration view, right-click on "Messages" folder and select the New Message option. In the identification, type "msgIncompleteOrder" and select the "OrderManager.IncompleOrder" and its schema (the schema is available in the schemas item).
Create three additional messages with the following identifiers/schemas:
msgCompleteOrder
OrderManager.CompleteOrder
msgGetCustomerReq
OrderManager.procedureRequest
msgGetCustomerResp
OrderManager.procedureResponse
Now, we will create the elements used in the orchestration. In the toolbox, search for the "Scope" shape and drag it to just below the green indicator in the designer area. Rename the scope shape to "Do Updates". Change its transaction type to "None".
Now, drag a "Receive" shape from the toolbox to the area inside the scope shape. In the shape properties, set the properties below:
Name
Receive Incomplete Order
msgIncompleteOrder
Activate
True
Just below the receive shape, drag a "Construct Message" shape. Use the following properties:
Messages Constructed
Now, drag a "Transform" shape inside the empty area inside the "Construct Message" shape. Select the "Input Messages" property and click the (...) button. A new window will open up with the mapping options. In the "Fully Qualified Map Name" box, type "ProcessOrder.IncompleteOrder_To_SQLRequest". Click on the "Source" option and select the "msgIncompleOrder" as the source message. Click on "Destination" and select the "msgGetCustomerReq" as the destination message. Your "Transform Configuration" screen should look like the one below. When you finish, click OK.
In the map editor, drag the CustomerID field from the IncompleteOrder schema to the CustomerID field on the GetCustomerReq schema. Note that the destination schema represents the parameters used to call the stored procedure. Your map should look like the picture below:
IncompleteOrder
GetCustomerReq
Save the map and go back to the orchestration file. Change the name of the "Transform" shape to "Create Request".
Now, we'll create a "Send" shape that will send our request to the SQL Server Adapter. Drag a new "Send" shape just below the construct message shape and use the following properties:
Send SQL Request
After the Send Shape, create a new Receive Shape with the following properties:
Receive SQL Resp
After the Receive shape, drag a new construct message with the following properties:
Construct Response
Drag a new "Transform" shape to our newly created "Construct" shape. Select the property "Input Messages" and click on the (...) button. The transform configuration window will show up again. In the fully qualified name field, type ProcessOrder.SQL_To_CompleteOrder. In the Source option, select the "msgGetCustomerRep" and the "msgIncompleteOrder" (that is, two messages as source). Select the "Destination" option and select the "msgGetCustomerReq" option. The screen should look like the one below:
Click OK and in the map editor, create a map with the following links:
Close and save the map, and go back to the orchestration. Change the name of the "Transform" shape to "Create Response".
Now, we'll create a new "Send" shape that will send the final response to our customer. Create a new "Send" shape below the Construct Message shape, with the following properties:
Send Response
At this point, your orchestration should look like the one shown below:
Now that we've created the shapes for our orchestration, we will create the logical ports. Right-click the "Port Surface" (on the left) of the orchestration and select the "New Configured Port" option. The "New Port Wizard" will show up to create the logical port and port types for us, click Next on the first screen. On the next screen, type "rpReceiveIncomple" at the Port Name box and click "Next". In the port information screen, make sure the "Create New Port Type" option is selected, and type rpReceiveIncompleteType as the port type name. Leave the other options in their default values and click Next. In the communication direction options, leave the default "I'll always be receiving messages on this port" option selected, click Next, and the Finish.
Click on the port surface again (now on the right side) and select the "New Configured Port" option again. The wizard will show up again, click Next. On the next screen, type srpGetCustomer as the port name, and click Next. On the next screen, select the "Use Existing Port Type" option and select the "OrderManager.SQLExec" port type. This port-type is created by the SQL adapter and contains the necessary configuration to call our stored procedure. Click Next. In the communication direction option, select the "I'll always be sending a request and receiving a response" option, click Next, and then Finish.
Now, let's create our last port (whew!). Click on the "Port Surface" again (right side now) and select "New Configured Port" option. Our "already-known" wizard will show up again... click Next. In the port name box, type spSendCompleteOrder, click Next. In the port information screen, select the "Create New Port Type" option, and in the port type (on the next screen), type spSendCompleteOrderType. On the "Communication Direction" option, select the "I'll always be sending messages on this port" option. Click Next, Finish, and we're ready! :)
To finish our orchestration we need to connect the receive and send shapes to our ports. To do this, simply drag the connectors from each shape to each logical port on the port surface. The orchestration should look like in the image below:
Now that our orchestration is ready, we need to configure our assembly so that it can be used by BizTalk. To do this, we need to generate a pair of keys for it, using the SN.EXE application. In the Start menu, look for the Visual Studio 2003 folder, and select the "Visual Studio .NET 2003 command prompt". In the command prompt, change the current directory to C:\ and type "sn - k key.snk". This command should generate our key-pair. In the Solution Explorer, right-click on the project in the Solution Explorer and select "Properties". In the properties of the project, select the "Assembly" tab, and in the item Assembly Key File, type C:\key.snk, as in the image below:
Now, right-click on the project and select the "Build" option. If there are no errors, right click on the project again and select the "Deploy" option. Our project should be deployed to BizTalk.
Now, we're going to configure BizTalk to receive the messages and send them to our orchestration. In Visual Studio, select the View menu and select the "BizTalk Explorer" option, a new window will show up with the BizTalk Explorer. Expand your BizTalk server instance name and right-click on "Receive ports", selecting the "Add Receive Port" option. On the Port Type, select "One-way Port". In the Port Name box, type rpReceiveOrder and click Ok. You'll see that our new port will be created.
Now, we'll create our receive location for this port. Expand the port you have created in the previous step and right-click on "Receive Locations". In "Transport Type" property, select "FILE". In the "Address (URI)" option, click on the (...) button and set the receive folder as "C:\Receive" (or the folder of your choice). Click OK. In the Receive Handler property, select "BizTalk Application", and in the "Receive Pipeline" property, select "Microsoft.BizTalk.DefaultPipelines.XMLReceive". Click OK to create the receive location. It will be necessary to create the C:\Receive folder, in order to get the solution working. After you create the receive location, right-click on it and select "Enable".
Microsoft.BizTalk.DefaultPipelines.XMLReceive
Now, we'll create the send port to send information to SQL Server. Right click on the "Send Port" and select the "Add Send Port" option. On the Port Type, select "Static Solicit-Response" port. On the Port Name, type spSQL, and in the Transport Type, select SQL. In the Address (URI), click on the (...) button. Complete the "SQL Transport Properties" screen with the information below. Take care of the "Document Namespace" property and the "Response Root Element Name". These properties should contain the same namespaces used by the generated schema in the earlier steps.
On the Send Port properties, change the Send Pipeline property to Microsoft.BizTalk.DefaultPipelines.XMLTransmit and the Receive Pipeline property to Microsoft.BizTalk.DefaultPipelines.XMLReceive. Click OK and Finish.
Microsoft.BizTalk.DefaultPipelines.XMLTransmit
Now, we'll create the last port. Right-click on "Send Ports" and select the "Add New Port" option. Select "Static One-way port" as the port type, and name it spSendCompleteOrder. In the Transport Type option, select FILE. In the Address (URI) option, click on the (...) button and set C:\Output as the destination (this folder should be created manually). Click OK and set the "Send Pipeline" property to "Microsoft.BizTalk.DefaultPipelines.XMLTransmit". Click OK to create the port.
Now, right-click on each one of the ports and select the "Start" option.
Now, we need to bind our orchestration to the physical ports. In the BizTalk Explorer, expand the Orchestrations applet and right-click on our orchestration, selecting the "Bind" option. Set the binding information as in the image below:
On the Host configuration, select the default host (BiztalkApplicationHost). Click OK.
Right-click on our orchestration and click "Start". Click OK on the Express Start window as well.
Before we start testing our orchestration... do you remember that stored procedure we created in the earlier steps? Well, in order to get our solution working, we need to change some things on it. Go to the Enterprise Manager and edit the procedure, removing the XMLDATA option in the SELECT statement. This option is used to generate the schemas for BizTalk. Since our schemas are already created, we don't need this option anymore.
Now, get the input file created in the earlier steps and put it in the C:\Receive folder. The file should be extracted. Go to the C:\Output folder and wait until the complete order message appears.. | https://www.codeproject.com/Articles/12894/Using-a-SQL-Adapter-in-BizTalk-Server-2004?fid=262696&df=90&mpp=25&noise=1&prof=True&sort=Position&view=Normal&spc=Relaxed&fr=1 | CC-MAIN-2018-26 | refinedweb | 2,792 | 65.62 |
The Need of CSS Style Guides
Hoang Nguyen
・4 min read
You Think You Know CSS (7 Part Series)
Any programming languages lacking built-in namespaces will desperately need coding style guides and CSS is obviously one of them. CSS codes are extremely opinionated across teams and individuals, most companies maintain their own CSS style guides generated manually based on popular style guides with some modification.
You Think You Know CSS is an original series about some core CSS topics
which you think you know but not deep enough to talk about them comfortably.
- How CSS Works Under the Hood
- Beloved CSS Frameworks
- CSS-in-JS Styling Technique
- Long-standing CSS Methodologies
- Extending standard CSS by preprocessors
- The Opinionated Decision on CSS Resets
- The Need of CSS Style Guides (this post)
A CSS style guide is a set of standards and rules on how to use and write CSS code in a consistent and maintainable way.
The Motivation
There isn’t much we can do to change how CSS works, but we can make changes to the way we author and structure it.; the problem with CSS isn’t CSS, it’s humans. We can only keep so much in our heads, and sharing that knowledge across teams, even teams who really care about consistent styling is really hard.
Every project needs some organization. Throwing every new style you create onto the end of a single file would make finding things more difficult and would be very confusing for anybody else working on the project.
How do you decide which elements should get the styling magic you wish to bestow upon it? How do you make it easy to understand how your site and your styles are organized?
You don't need to maintain a custom CSS style guide on your own but you need to find one to follow, failing to do so will put you in a position of maintaining an ugly unmaintainable codebase sooner or later.
The Benefits
Following a CSS style guide can create a beautiful code base which will make any developer love working on a project. They enable a codebase to be flexible, easily scalable and very well documented for anyone to jump in and work on with other developers.
- Keep stylesheets maintainable and scalable
- Keep code transparent, sane, and readable
- Set the standard for code quality in team
- Give developer feeling of familiarity
- Promote consistency across projects
- Increase developer productivity
The Recipes
Ordering properties is just one choice you have to make that makes up a complete styling strategy. Naming is a part of it. Sectioning is a part of it. Commenting, indentation, overall file structure, etc. It all makes up a complete CSS style guide.
- Coding format — spacing, characters width, indenting, whitespace, alignment
- Commenting convention — what to be documented and removal in production using minification
- Naming convention — following a CSS methodology like BEM or not
- Coding rules — decisions about defining global styles, variables, mixins, functions, typography, unit usage
- Reusable components — consideration of putting some common components to be used across products
The Options
Following style guides are opinionated, but they have been repeatedly tried, tested, stressed, refined, broken, reworked, and revisited over a number of years on projects of all sizes:
Maintained by companies: Airbnb, Dropbox, WordPress, Google,
Bootstrap, ThinkUp,Workable, CodePen
Maintained by individuals: Harry Roberts, Mark Otto, Nicolas Gallagher
Design system is something bigger which includes CSS style guide inside, following design systems are well developed and maintained by companies:
Other resources might help you build good custom style guide on CSS or Sass automatically or manually:
- Style Guide Boilerplate
- Website Style Guide Resources
- BigCommerce Sass Guidelines
- Sass Guidelines
- SMACSS
Anyone who has attempted to maintain a CSS style guide over a long period of time will attest that it is a very difficult process. This is bad because once your style guide falls out of sync with your applications it has entirely lost its purpose.
It's best to have a living CSS style guide auto generated from codebase or just follow the one that is being well developed and maintained by community.
The Takeaways
CSS style guide aims at improving collaboration, code quality, and enabling supporting infrastructure. It contains methodologies, techniques, and tips that you and your team should follow.
You could generate a style guide automatically — kss, mdcss, stylemark, nucleus — or manually. Any method would work, as long as it does not take too much time.
Using style guide along with formating tools like Prettier or editorconfig that could help your team produce well-formatted code.
Each style guide has or should have the same goals — to improve consistency and usability. By standardizing your code, and by applying uniform design principles, you could create a unique cookbook that could help your team build the product more efficiently and consistently.
You Think You Know CSS (7 Part Series) | https://dev.to/hoangbkit/the-need-of-css-style-guides-3lhk | CC-MAIN-2019-47 | refinedweb | 810 | 52.94 |
An object oriented wrapper around a recursive mutex. More...
#include <rtt/os/Mutex.hpp>
An object oriented wrapper around a recursive mutex.
A mutex can only be unlock()'ed, by the thread which lock()'ed it. A trylock is a non blocking lock action which fails or succeeds.
Definition at line 208 of file Mutex.hpp.
Destroy a MutexRecursive.
If the MutexRecursive is still locked, the RTOS will not be asked to clean up its resources.
Definition at line 227 of file Mutex.hpp.
Lock this mutex, but don't wait longer for the lock than the specified timeout.
Implements RTT::os::MutexInterface.
Definition at line 264 of file Mutex.hpp.
Try to lock this mutex.
Implements RTT::os::MutexInterface.
Definition at line 250 of file Mutex.hpp. | http://www.orocos.org/stable/documentation/rtt/v2.x/api/html/classRTT_1_1os_1_1MutexRecursive.html | CC-MAIN-2015-18 | refinedweb | 128 | 61.83 |
In February 2013 was released PhoneGap 2.4.0. You could download the latest PhoneGap 2.4.x - or or PhoneGap project from the Github :. This article is a short update about new features and improvements in Cordova 2.4.0 compared to version 2.3.0 for Windows Phone and Windows Store applications. PhoneGap updates for iOS could be found here. Cordova 2.4.0 updates for Android are available in this article. Most of changes and new features are related to iOS. General release notes for Cordova 2.4.0 are available here.
Cordova 2.4.0 for Windows Store Applications.
In the article “Using PhoneGap in Windows 8 Store Applications” you can see details how to start with PhoneGap for Windows 8. Unlike the version 2.3.0 last Cordova 2.4.0 has no issues with templates and you can export it without problems from the template project: [phonegap-2.4.0 path]\windows8\template\ .
Cordova 2.4.0 for Windows Phone 8 Applications.
PhoneGap for Windows Phone 8 offers two pre-built templates: 'Stand-Alone' and ‘Full’ templates. More about Cordova pre-built templates you can find in this blog about “Using PhoneGap in Windows Phone 8 Applications”. Unlike PhoneGap 2.3.0 in version 2.4.0 pre-built templates are correctly set up and can be used. You need not to create a new project template.
Conclusions: PhoneGap 2.4.0 is a more stable version than its predecessor. In this version are fixed all issues related to the Windows 8 and WP 8 project templates. This version contains mainly updates and new features for iOS. API for WIndows Phone 7 and Windows Phone 8 is more consistent: namespaces and references are now closer. For WP 7 WP7CordovaClassLib is renamed to WPCordovaClassLib (like in PhoneCap for WP 8). I strongly recommend that you use the latest version of PhoneGap 2.4.0.
Follow news from Infragistics for more information about new Infragistics events.
As always, you can follow us on Twitter @mihailmateev and @Infragistics and stay in touch on Facebook, Google+andLinkedIn!
Be the first one to add a comment. Please Login or Register to add comment. | http://www.infragistics.com/community/blogs/mihail_mateev/archive/2013/02/22/phonegap-2-4-0-for-windows-phone-8-and-windows-store-applications.aspx | CC-MAIN-2014-42 | refinedweb | 364 | 70.19 |
plot a graph in Java
plot a graph in Java I need to plot a graph in Java but the samples... need to plot them as I receive them. This requires calling repaint() again... and the final graph may be quite congested. So please guide me on this.
Thanks
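One common pattern for plotting samples as they arrive is to keep them in a list and let paintComponent() redraw the whole polyline on each repaint(). A minimal sketch (class and method names are mine, not from the original thread):

```java
import java.awt.*;
import java.util.ArrayList;
import java.util.List;
import javax.swing.*;

public class LivePlotPanel extends JPanel {
    private final List<Double> samples = new ArrayList<>();
    private final double min, max;

    public LivePlotPanel(double min, double max) {
        this.min = min;
        this.max = max;
    }

    // Call this from the thread that receives data; repaint() is
    // thread-safe and just schedules an asynchronous redraw.
    public void addSample(double value) {
        samples.add(value);
        repaint();
    }

    // Maps a sample value to a y pixel (0 is the top of the panel).
    static int toPixelY(double value, double min, double max, int height) {
        return (int) Math.round((max - value) / (max - min) * (height - 1));
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        int w = getWidth(), h = getHeight();
        int n = Math.max(1, samples.size() - 1);
        for (int i = 1; i < samples.size(); i++) {
            g.drawLine((i - 1) * w / n, toPixelY(samples.get(i - 1), min, max, h),
                       i * w / n, toPixelY(samples.get(i), min, max, h));
        }
    }

    public int sampleCount() { return samples.size(); }

    public static void main(String[] args) {
        if (GraphicsEnvironment.isHeadless()) return; // no display available
        JFrame f = new JFrame("Live plot");
        LivePlotPanel p = new LivePlotPanel(0, 100);
        f.add(p);
        f.setSize(400, 300);
        f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        f.setVisible(true);
        new Timer(200, e -> p.addSample(Math.random() * 100)).start();
    }
}
```

If the graph gets congested, drop or average old samples before drawing instead of keeping every point.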
awt in java
awt in java Using AWT in Java GUI programming, how do I set the maximize property of a frame to false?
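Assuming "set the maximize property to false" means stopping the user from maximizing the window: AWT's Frame (and Swing's JFrame) has no separate maximize switch, but setResizable(false) disables resizing and the maximize button together. A minimal sketch:

```java
import java.awt.*;

public class FixedSizeFrameDemo {
    static Frame buildFrame() {
        Frame frame = new Frame("Fixed-size AWT frame");
        frame.setSize(300, 200);
        frame.setResizable(false); // disables resizing and the maximize button
        return frame;
    }

    public static void main(String[] args) {
        if (GraphicsEnvironment.isHeadless()) return; // no display available
        buildFrame().setVisible(true); // (window-close handling omitted)
    }
}
```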
java-awt How to include a picture stored on my machine in a java frame...
try using this:
img = Toolkit.getDefaultToolkit().getImage("C...
Thanks
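Completing the truncated answer above into a runnable sketch: Toolkit.getImage() loads the file lazily, and the frame draws it in paint(). The file path is a placeholder, since the original "C..." path is unknown:

```java
import java.awt.*;

public class ImageFrame extends Frame {
    private final Image img;

    public ImageFrame(String path) {
        super("Image on a frame");
        // Placeholder path -- substitute a real file on your machine.
        img = Toolkit.getDefaultToolkit().getImage(path);
        setSize(400, 300);
        addWindowListener(new java.awt.event.WindowAdapter() {
            @Override
            public void windowClosing(java.awt.event.WindowEvent e) { dispose(); }
        });
    }

    @Override
    public void paint(Graphics g) {
        // Passing 'this' as ImageObserver repaints once the lazy load finishes.
        g.drawImage(img, 0, 30, this);
    }

    public static void main(String[] args) {
        if (GraphicsEnvironment.isHeadless()) return; // no display available
        new ImageFrame("C:/pictures/sample.jpg").setVisible(true);
    }
}
```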
Graphical calculator using AWT - Java Beginners
Graphical calculator using AWT Hi Sir,
Thanks for the reply.....and is it the same code we need implement on Graphical Calc using Swing?
Here is the prg code...
Calculator example");
this.setResizable(false);
}//end constructor
Graphical calculator using AWT - Java Beginners
Graphical calculator using AWT hi Sir,
I need a source code for the following prgm...pls help me..
Implement a simple graphical calculator using...
plotting a graph - Java Beginners
plotting a graph pls help me out regarding plotting a graph. I have done that through LiveGraph API. but now i required the graph to be displayed on the gui- window. and whatever inputs i will pass it must change the graph
Applet Graph
Applet Graph hai,
iam new to java plz help me..
1..i have a table
...columns-names by using check box and click on graph button,
3..then i have to get a graph between those columns in a applet
plz send me the code
thank u...
radar graph in java
radar graph in java Hi All
I am developing a java application... {
public DefaultCategoryDataset dataset;
public SpiderWebPlot plot;
...
SpiderWebPlot plot = new SpiderWebPlot(dataset);
}
Java AWT Package Example
This is a simple program of the java awt package which constructs a form-like look and shows how to handle different key events on a Java AWT component by using the event...
awt
Java AWT Applet example how to display data using JDBC in awt/applet
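A sketch of handling key events on an AWT component with a KeyListener (the original example's full source is not in the snippet, so the names here are mine):

```java
import java.awt.*;
import java.awt.event.*;

public class KeyEventDemo {
    // Pure helper so the formatting logic is testable without a display.
    static String describe(String kind, char ch) {
        return kind + ": " + ch;
    }

    public static void main(String[] args) {
        if (GraphicsEnvironment.isHeadless()) return; // no display available
        Frame frame = new Frame("AWT key events");
        TextField field = new TextField(20);
        field.addKeyListener(new KeyAdapter() {
            @Override
            public void keyTyped(KeyEvent e) {
                System.out.println(describe("typed", e.getKeyChar()));
            }

            @Override
            public void keyPressed(KeyEvent e) {
                System.out.println("pressed code: " + e.getKeyCode());
            }
        });
        frame.add(field);
        frame.pack();
        frame.setVisible(true);
    }
}
```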
java-graph help - Java Beginners
java-graph help sir/madam
I wanted to design a shortest path Algorithm.
i wanted to show the shortest path in graphical format in the way it looks... to print the variable value in the diagram.
i know to write constant value by using
Graph in JAVA SWING
Graph in JAVA SWING Hi,
How to draw a simple GRAPH in JAVA SWING ?
Please visit the following link:
Bar Graph in Java Swing
Thanks
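Since the linked "Bar Graph in Java Swing" page is not reproduced here, a minimal dependency-free bar graph drawn in paintComponent() (the data and names are illustrative):

```java
import java.awt.*;
import javax.swing.*;

public class BarGraphPanel extends JPanel {
    private final double[] values;
    private final double maxValue;

    public BarGraphPanel(double[] values, double maxValue) {
        this.values = values.clone();
        this.maxValue = maxValue;
    }

    // Height in pixels of one bar; static so it is easy to test.
    static int barHeight(double value, double maxValue, int panelHeight) {
        return (int) Math.round(value / maxValue * panelHeight);
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        int w = getWidth() / values.length;
        for (int i = 0; i < values.length; i++) {
            int h = barHeight(values[i], maxValue, getHeight());
            g.setColor(Color.BLUE);
            g.fillRect(i * w + 2, getHeight() - h, w - 4, h); // bars grow upward
        }
    }

    public static void main(String[] args) {
        if (GraphicsEnvironment.isHeadless()) return; // no display available
        JFrame f = new JFrame("Bar graph");
        f.add(new BarGraphPanel(new double[]{3, 7, 5, 9}, 10));
        f.setSize(300, 200);
        f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        f.setVisible(true);
    }
}
```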
Drawing Graphs - Swing AWT
Drawing Graphs hi i am doing a small project in java swings . there is a database table having two columns A,B i want to plot a graph to that values in database the graph must be interactive graph
Multi line graph - Java Beginners
Multi line graph Hi,
I want a multi line graph on a single chart using jfree in java....
Can you please let me know the code...
thanks in advance. Hi Friend,
Log in to the below-mentioned URL & download the file
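With JFreeChart the usual route is ChartFactory.createXYLineChart() fed by an XYSeriesCollection holding one XYSeries per line (the JFreeChart jar is required, so that version is not shown here). As a dependency-free sketch, several series can also be drawn on one Swing panel:

```java
import java.awt.*;
import javax.swing.*;

public class MultiLinePanel extends JPanel {
    private final double[][] series; // one row per line
    private final double max;
    private static final Color[] COLORS = {Color.RED, Color.BLUE, Color.GREEN};

    public MultiLinePanel(double[][] series, double max) {
        this.series = series;
        this.max = max;
    }

    // Value-to-pixel mapping, 0 at the top of the panel.
    static int yPixel(double v, double max, int height) {
        return (int) Math.round((1 - v / max) * (height - 1));
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        for (int s = 0; s < series.length; s++) {
            g.setColor(COLORS[s % COLORS.length]);
            double[] row = series[s];
            for (int i = 1; i < row.length; i++) {
                int x1 = (i - 1) * (getWidth() - 1) / (row.length - 1);
                int x2 = i * (getWidth() - 1) / (row.length - 1);
                g.drawLine(x1, yPixel(row[i - 1], max, getHeight()),
                           x2, yPixel(row[i], max, getHeight()));
            }
        }
    }

    public static void main(String[] args) {
        if (GraphicsEnvironment.isHeadless()) return; // no display available
        JFrame f = new JFrame("Multi-line graph");
        f.add(new MultiLinePanel(new double[][]{{1, 4, 2, 8}, {3, 3, 6, 5}}, 10));
        f.setSize(400, 250);
        f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        f.setVisible(true);
    }
}
```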
java awt package tutorial
Java AWT Package
In Java, the Abstract Window Toolkit (AWT) is a platform-independent windowing, graphics, and user-interface toolkit.... The implementation of the user interface elements provided by the AWT is done using each platform's native GUI resources... a native peer is used by each AWT component to display itself on the screen.
For example - Swing AWT
market chart this code made using "AWT" . in this chart one textbox when user...,
For solving the problem visit to :
Thanks
java swings - Swing AWT
write the code for bar charts using java swings. Hi friend,
I am.... I am doing a project for my company. I need to show
AWT basics
, graphics, and user-interface for the desktop application using Java technology.
Nowadays developers are using Swing components instead of AWT to develop...
java-swings - Swing AWT
java-swings How to move JLabel using Mouse? Here the problem is i... at runtime by moving labels using mouse.
Plz tell me the correct way to solve.
Thanks.
Amardeep
Plz tell me the correct way to solve
Java AWT
Java AWT What interface is extended by AWT event listeners
Java AWT
Java AWT What is meant by controls and what are different types of controls in AWT
Create a Container in Java awt
... on the frame has been specified south of the frame by using BorderLayout.SOUTH
since the position of the text area has been specified the
center of the frame using
AWT Tutorials
AWT Tutorials How can i create multiple labels using AWT????
Java Applet Example multiple labels
1)AppletExample.java:
import javax.swing.*;
import java.applet.*;
import java.awt.*;
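The applet code above is cut off at the imports; since applets are long obsolete, here is the same multiple-labels idea as a plain Swing panel (lightweight components, so it even builds without a display):

```java
import java.awt.GridLayout;
import javax.swing.*;

public class MultiLabelDemo {
    // One JLabel per string, stacked in a single-column grid.
    static JPanel buildLabelPanel(String... texts) {
        JPanel panel = new JPanel(new GridLayout(texts.length, 1));
        for (String t : texts) {
            panel.add(new JLabel(t));
        }
        return panel;
    }

    public static void main(String[] args) {
        if (java.awt.GraphicsEnvironment.isHeadless()) return; // no display available
        JFrame f = new JFrame("Multiple labels");
        f.add(buildLabelPanel("Name:", "Address:", "Phone:"));
        f.pack();
        f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        f.setVisible(true);
    }
}
```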
import
java + excel data +graph - JDBC
java + excel data +graph i am doin' a project in which i need... a graph with the facility of drawing a trendline(in the graph)...also i need to be able to select points in the graph and draw the graph again between those
JAVA QUESTION
JAVA QUESTION How to view image on Frame in swing(or)awt in Java
Applet program for drawing picture and graph - Applet
the program(code) of drawing picture and graph in Applet. Hi Friend,
Please visit the following links:
Hope
Combined Category Plot Example using JFreeChart
Combined Category Plot Example using JFreeChart
This Example shows you how to create a Combined Category Plot chart using
JFreeChart. Code for the chart friend,read for more information,
java - Swing AWT
java How can my java program run a non-java application. Will it be possible to invoke a gui application using java
java - Swing AWT
java Write Snake Game using Swings
Java Swings problem - Swing AWT
Java Swings problem Sir, I am facing a problem in JSplitPane. I want... pane. For example, if the split pane is of dimension (0,0,100, 400), then divider... of using BasicSplitPaneUI, BasicSplitPaneDivider etc etc but no solution. Please give Code - Swing AWT
Java Code Write a Program using Swings to Display JFileChooser that Display the Naem of Selected File and Also opens that File
java - Swing AWT
What is Java Swing AWT What is Java Swing AWT
Java Program - Swing AWT
Java Program A program to create a simple game using swings. Hi Friend,
Please visit the following link:
Thanks
Java AWT Package Example
java - Swing AWT
java i need to set the popup calender button in table column&also set the date in the same table column using core java-swing
Java Code - Swing AWT
Java Code How to Make an application by using Swings JMenuBar and other components for drawing various categories of Charts(Line,Bar etc
java - Swing AWT
java how can i add items to combobox at runtime from jdbc Hi Friend,
Please visit the following link:
Thanks Hi Friend
jfree - Java Beginners
jfree how use the "import jfree" on jdk1.5.0_6,,
or how to plot data in xy line on jdk1.5.0_6
Java - Swing AWT
....("Paint example frame") ;
getContentPane().add(new JPaintPanel
java - Swing AWT
java Hello Sir/Mam,
I am doing my java mini... it either on panel or frame.But in my project,i have to image using JFileChooser... for upload image in JAVA SWING.... Hi Friend,
Try the following code
Java - Swing AWT
Java program to read,add,update,delete the database using swing and servlet iam getting the xml result then i'll save my result in folder using jfilechooser in swings.give me any examples
JAVA - Swing AWT
JAVA Sir how can i design flow chart and Synopsis for random password generation by using swing in java . Hi Friend,
Try the following code:
import java.io.*;
import java.util.*;
import java.awt.*;
import
java - Swing AWT
java hi can u say how to create a database for images in oracle and store and retrive images using histogram of that image plz help me its too urgent
java - Swing AWT
java Hi,
I override the mouseClicked interface in my code and get the X and Y coordinates of the event using e.getX() and e.getY().
Is there a way to obtain the text at that location? Hi Friend,
Please clarify
Java - Swing AWT
Java program to read,add,update,delete the database using swing and servlet Hello
Just Refer Don't Copy the content of this URL
Thanks
Rajanikant Hi friend,
To solve
query - Swing AWT
java swing awt thread query Hi, I am just looking for a simple example of Java Swing
Java Program - Swing AWT
Java Program Write a Program that display JFileChooser that open a JPG Image and After Loading it Saves that Image using JFileChooser Hi Friend,
Try the following code:
import java.io.*;
import java.sql.
awt list item* - Swing AWT
information.
Thanks...awt list item* how do i make an item inside my listitem...);
choice.add("Java ");
choice.add("Jsp");
choice.add("Servlets
how to implements jdbc connections using awt?
how to implements jdbc connections using awt? My name is Aditya... example awt with jdbc.
We are proving you a simple application of Login and Registration using java swing.
1)LoginDemo.java:
import javax.swing.
java awt components - Java Beginners
java awt components how to make the the button being active at a time..?
ie two or more buttons gets activated by click at a time
AWT code for popUpmenu - Swing AWT
for more information. code for popUpmenu Respected Sir/Madam,
I am writing a program in JAVA/AWT.My requirement is, a Form consists of a "TextBox" and a "Button
Look and Feel - Swing AWT
:
Hope...Look and Feel i m using netbeans for creating a swing GUI
swings - Swing AWT
swings how to develope desktop applications using swing Hi Friend,
Please visit the following link:
Here you will get lot of swing applications
provide code - Swing AWT
GAME.....using swings,awt concepts
Hi friend,
import java.awt....);
}
}
-------------------------------------
visit for more information.
Thanks
SWINGS - Swing AWT
more information,Examples and Tutorials on Swing,AWT visit to :
SWT_AWT bridge ,jtextfield edit problem - Swing AWT
SWT_AWT bridge ,jtextfield edit problem Hi All,
I am using SWT_AWT...(display);
shell.setText("Using Swing and AWT");
shell.setSize(350... visit to :
Thanks
scrolling a drawing..... - Swing AWT
information. a drawing..... I am using a canvas along with other components like JTable over a frame
the drawing which i am going to show over canvas
Java logical error - Swing AWT
Java logical error Subject:Buttons not displaying over image
Dear Sir/Madam,
I am making a login page using java 1.6 Swings.
I have want to apply picture in the backgoud of the frame and want to place buttons
java
java how we can create the purches order by using the java awt
java
java how we can create purches order format by using the java awt
Event handling in Java AWT
Event handling in Java AWT
... events in java awt. Here, this
is done through the java.awt.*; package
of java. Events are the integral
part of the java platform. You can see the concepts
reading data from excel file and plotting graph
excel to plot on an axis in graph... using NetBeans in which i have to take input an excel file and then using the data in excel file, i have to plot graphs based on CELL ID selected. please help
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://www.roseindia.net/tutorialhelp/comment/88329 | CC-MAIN-2015-18 | refinedweb | 1,898 | 62.17 |
UPDATE:
Categories: .NET, BizTalk, SOA, WCF/WF
Richard,
Excellent series, the community has certainly benefited from the hardwork!
Cheers, Nick.
Richard, great series of articles!
On your Order and Order Items insert scenario, how would you insert an order id generated by Oracle into the items table (like the order to order items scenario in SQL Server updategram where we use at-identity).
Also, can you select multiple tables you want it to generate the metadata for or do you have to select one at at time?
Thanks,
Thiago Almeida
Hey Thiago,
You mean that the “order id” column is a generated column? Haven’t tried that yet.
As for multiple tables, YES, you can actually choose operations from many tables at once.
Hi Richard,
Yes, I was wondering how it would be possible to insert an order and its order items using only one call to the LOB adapter (in the same transaction) and having the auto generated column be the link between the two tables.
I’ll have to start playing around with these adapters soon!
Thanks, Thiago Almeida
I’ve got a question about the completion of Scenario 3. I’m having trouble creating the enveloping, even after reading your posting about the SQL adapter.
Any advice or examples you know of?
Hi Matt,
I don’t think there was an envelope in Scenario #3. You’re talking about the article this post references, the one using the Oracle adapter, right?
Silly me, this is what happens when I’ve got 4 windows with your blog on them open…
Anyhow, yes… I was asking about the oracle adapter post where you stopped short of making it “enterprise ready.”
If you want to delete these comments, I’ll repost on the correct posting.
I’ll take it as a compliment that you have 4 blog posts open at once 😉
Yes, I see the text that you are referring to. Have you made the envelope schema yet? Did you take the original schema, mark it as an envelope, and then set the body xpath? Then you’d need to create a copy of that schema, rip out everything but the innermost POLLINGSTMTRECORD. Hopefully then, the envelope would debatch and you’d end up with that secondary schema type for each output message.
Very similar to the solution I finally worked out:
1. Set the generated schema to “Envelope=yes”
2. Set the XPath.
3. Set the root node to POLLINGSTMT (neccessary, or the message doesn’t route). I also set the Attribute FormDefault to Unqualified like you did in your SQL example.
4. Set max occurs to 1 on POLLINGSTMTRECORD under POLLINGSTMT
5. DELETE the POLLINGSTMT record (high level, up with the POLLINGSTMT) in the generated schema. This conflicts with the new schema created next.
6. Create a new schema and “include” the generated schema. Note: You must copy the namespace of the original schema.
7. Create a new record in the schame called “POLLINGSTMTRECORD”
8. Set this new record as the root reference and it’s datatype to POLLINGSTMTRECORD.
Now the xml pipeline will debatch successfully.
I’ve had comments on forum posts that I should be careful setting the root reference, but it was the ONLY way I could get the error message that a schema for “POLLINGSTMTRECORD” was not found that aligned this solution with your SQL post.
suggestions / Feedback?
Hi,
Please can you let me know which schemas I should set for
Envelope and Documents schemas for xml dissambler.
Thanks
Murari
Murari, you referring to the debatching of the database content? If so, then the POLLINGSTMT root schema would be marked as an envelope with another schema needed which held the POLLINGSTMTRECORD node which would get debatched. Hopefully similar to DB-debatching posts I’ve done in the past …
Richard,
Can you please answer Thiago’s question on multiple tables inserts within the same transaction using WCF-SQL adapter?
Thanks,
Durga
Durga,
Haven’t tried it yet with taking the auto-generated ID returned from one table and making that a key in the second table while keeping it all under the same transaction.
I searched so many applications but i did not find anything,luckily i got this links It is very useful to me.thanks to richards.
Links are not opening allow to open the site.
Hi there, see the note at the top where I’ve mentioned that these articles are now hosted on this blog (). | https://seroter.wordpress.com/2008/05/08/article-series-on-biztalk-and-wcf-part-ix-biztalk-adapter-pack-biztalk-patterns/?replytocom=6535 | CC-MAIN-2019-26 | refinedweb | 744 | 71.85 |
Displaying running background threads on the UIcsh Jul 26, 2013 2:35 PM
Hi,
sometimes I have the situation, that a user action needs some time to perform. So I put that work in a Task or Service.
I want to inform the user, that his or her action is still being performed. A simple ProgressIndicator is enough.
I know how to do it with a single task, but theoretically there could be many tasks running, even some which were not triggered by user action.
So I want a similar thing as in Eclipse or IntelliJ: A "background tasks" control, which displays all running threads.
Is there an easy way to do this, I mean from the Thread/Task/Service perspective, not the UI.
Do I need to register each thread I start in that control, e.g. when it is scheduled?
Should I derive Task to do this logic?
Is there a way to "watch" all threads?
What is the best approach?
My idea was to add each Task to the control. And the control listens to the running property. When it has finished it is removed from the controls observable list.
But what about simple Threads, which might be started somewhere outside the JavaFX Application Thread?
1. Re: Displaying running background threads on the UIJames_D Jul 27, 2013 1:37 AM (in response to csh)
For the more general part of the question (Threads which may be started outside of the JavaFX Application Thread):
I've never worked with this part of the Thread API, but Threads belong to ThreadGroups. You can retrieve the group to which a Thread belongs with Thread.getThreadGroup(). ThreadGroups are hierarchical; every ThreadGroup except the top level ThreadGroup has a parent ThreadGroup (ThreadGroup.getParent()) and you can get references to all the child ThreadGroups with ThreadGroup.enumerate(ThreadGroup[]). Thread also has a getState() method which returns a Thread.State enum instance.
Of course, none of this is observable in the JavaFX properties sense, so you would likely have to periodically poll this rather than be able to passively listen for state changes.
For the easier part, some ideas:
I have done one thing which is conceptually similar to what you're trying to do. I have one client-server type application in which the data model communicates with a Remote EJB service layer. All calls to the service layer are threaded. The model exposes an IntegerProperty representing the number of Tasks which are pending (either waiting to be executed or in execution). Most calls to the model result in the creation of a Task whose call() method executes on the remote service. The pertinent parts of the model look like this:
public class Model { private final ExecutorService executorService ; private final IntegerProperty pendingTasks ; // ... public Model() throws NamingException { // ... executorService = Executors.newSingleThreadExecutor() ; pendingTasks = new SimpleIntegerProperty(0); } private void addListenersToTask(final Task<?> task) { task.stateProperty().addListener(new ChangeListener<State>() { @Override public void changed(ObservableValue<? extends State> observable, State oldValue, State newValue) { State state = observable.getValue(); if (state == State.SUCCEEDED) { decrementPendingTasks(); task.stateProperty().removeListener(this); } else if (state == State.FAILED) { task.stateProperty().removeListener(this); try { context.close(); } catch (NamingException exc) { System.out.println("Warning: could not close context"); } status.set(ConnectionStatus.DISCONNECTED); } } }); } private void scheduleTask(Task<?> task) { addListenersToTask(task); incrementPendingTasks(); executorService.submit(task); } private void decrementPendingTasks() { if (! Platform.isFxApplicationThread()) { throw new IllegalStateException("Must be on FX Application thread"); } if (pendingTasks.get()<=0) { throw new IllegalStateException("Trying to decrement non-positive number of pending tasks"); } pendingTasks.set(pendingTasks.get()-1); } private void incrementPendingTasks() { if (! Platform.isFxApplicationThread()) { throw new IllegalStateException("Must be on FX Application thread"); } pendingTasks.set(pendingTasks.get()+1); } public int getNumberOfPendingTasks() { return pendingTasks.get() ; } public ReadOnlyIntegerProperty numberOfPendingTasksProperty() { return pendingTasks ; } // ... }
All tasks are scheduled for execution by calling the scheduleTask(...) method above, which increments the pendingTasks property and adds a listener to decrement it when it completes. The only way for Tasks to fail in this application is if the remote service fails (no other exceptions are thrown by the tasks), which is why I close the naming context on failure, and there's currently no functionality to cancel the tasks. The code will need modifying to deal with cancelation or failure if this is not the case.
Obviously you could do more than just a simple counter here using similar techniques. For example, you could keep an ObservableList<Task> for the tasks, add a task to that list when it is submitted, and remove it when it completes. Then you could provide that list to a control which could display the state of each task.
Another option might be to write a wrapper for ExecutorService which implemented this functionality, rather than implementing it in your model. (I can elaborate a bit if this is not clear, but I haven't actually tried this myself.) If you somehow exposed this ExecutorService wrapper to the whole application and used it for all your threading (whether invoked from the FXApplication Thread or not) then you could at least be aware of those threads, and should be able to monitor them relatively easily. Any threads which were not executed through your ExecutorService wrapper would still be difficult to monitor.
Not sure how much help that is.
2. Re: Displaying running background threads on the UIjsmith Jul 27, 2013 5:46 PM (in response to csh)
Have a look at:
and see if any of the solutions there help.
3. Re: Displaying running background threads on the UIcsh Aug 19, 2013 4:36 PM (in response to jsmith)
Thanks for your answers. I think James_D really got my point and suggested the same solution as I had in my mind.
The wrapper class which holds an ExecutorService, where you can submit tasks sounds good.
jsmith's link is also very revealing.
Unfortunately I haven't yet worked much with ExecutorService. | https://community.oracle.com/message/11152429 | CC-MAIN-2017-26 | refinedweb | 968 | 57.57 |
Role Links and Service Link Roles
A role is a collection of port types that either uses a service or implements a service. A role represents the type of interaction that a party can have with one or many orchestrations. Roles provide flexibility and ease of management as the number of parties increase..
A role link type is a property that characterizes the relationship between two services or orchestrations. It defines the part played by each of the services in the relationship and specifies the port types provided by each role.
A party, or organizational unit, represents an entity outside of BizTalk Server that interacts with an orchestration. In BizTalk Server, each organization with which you exchange messages is represented by a party. You can define how the party interacts by enlisting it in a role.
You can deploy or remove a role link type when it is associated with an orchestration.
Orchestrations and Roles
When you deploy an orchestration that uses a role link type, the Configuration database saves the role. Because a role can be used by more than one orchestration, the Management database saves only one copy of the role link type.
If your BizTalk project contains two role link types in separate orchestration (.odx) files that have the same name and namespace, your BizTalk project does not compile.
Orchestration Removals that use Roles
Because a role link type can be used by more than one orchestration, when you undeploy an assembly that contains an orchestration that uses a role, the Management database removes the role only if no other orchestrations are using the role.
Additionally, the Management database removes a role only if there are no parties enlisted with it. Just as you cannot overwrite a role that has parties enlisted with it, you cannot remove a role that has parties enlisted with it.
See Also
Using Role Links in Orchestrations
Artifacts | https://docs.microsoft.com/en-us/biztalk/core/role-links-and-service-link-roles?redirectedfrom=MSDN | CC-MAIN-2019-39 | refinedweb | 316 | 52.6 |
A set of utilities for testing matplotlib plots in an object-oriented manner.
A set of utilities for checking and grading matplotlib plots. Please note that ``plotchecker`` is only compatible with Python 3, and not legacy Python 2. Documentation is available on Read The Docs.
The inspiration for this library comes from including plotting exercises in programming assignments. Often, there are multiple possible ways to solve a problem; for example, if students are asked to create a “scatter plot”, the following are all valid methods of doing so:
# Method 1 plt.plot(x, y, 'o') # Method 2 plt.scatter(x, y) # Method 3 for i in range(len(x)): plt.plot(x[i], y[i], 'o') # Method 4 for i in range(len(x)): plt.scatter(x[i], y[i])
Unfortunately, each of the above approaches also creates a different underlying representation of the data in matplotlib. Method 1 creates a single Line object; Method 2 creates a single Collection; Method 3 creates n Line objects, where n is the number of points; and Method 4 creates n Collection objects. Testing for all of these different edge cases is a huge burden on instructors.
While some of the above options are certainly better than others in terms of simplicity and performance, it doesn’t seem quite fair to ask students to create their plots in a very specific way when all we’ve asked them for is a scatter plot. If they look pretty much identical visually, why isn’t it a valid approach?
Enter plotchecker, which aims to abstract away from these differences and expose a simple interface for instructors to check students’ plots. All that is necessary is access to the Axes object, and then you can write a common set of tests for plots independent of how they were created.
from plotchecker import ScatterPlotChecker axis = plt.gca() pc = ScatterPlotChecker(axis) pc.assert_x_data_equal(x) pc.assert_y_data_equal(y) ...
Please see the Examples.ipynb notebook for futher examples on how plotchecker can be used.
Caveats: there are many ways that plots can be created in matplotlib. plotchecker almost certainly misses some of the edge cases. If you find any, please submit a bug report (or even better, a PR!).
Download Files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/plotchecker/ | CC-MAIN-2017-47 | refinedweb | 390 | 65.22 |
- Data Conversion
- Datagrids and images
- getting the value from ~ at run time
- String Manipulation
- Parser Error Message: Ambiguous match found.
- Meaning of <%#
- timeout
- SQL Query
- how to add a Scroll Bar in CheckBoxList control in ASP.NET
- What C# book?
- Exporting
- validating portions of asp.net page.
- Response.CharSet Vs Response.ContentEncoding Vs responseEncoding property in web.Config
- hiding browser toolbar and menubar
- More Examples like this? (Common Database Functions)
- opulating Unicode characters into dropdownlist
- asp.net form name in .Net 1.1
- Windows 2003 Server Variables
- Form layout
- how do I access the application object within another assembly?
- Datareader
- switch(Request.QueryString("handler")) in .Net?
- migrating asp to asp.net: includes?
- Is it possible to share functions across aspx.vb pages?
- Performance of Xsl Transformations
- SmtpMail.Send - Class not registered
- Session - ASP and ASP.NET
- Open/Close Database Connection for each page
- Cold Fusion to ASP converter
- Server access to files on the LAN
- problems with public properties
- Losing Session Variable
- How to draw a transparent gif in web
- resources by customer (not culture)
- Application("AppPath")
- 'current custom error settings for application prevent details of application error from being viewed.'
- Error system.io.exception
- impersonating and LogonUser
- Inherited Page Always Calls Base Page Load Event
- How to implement Web Popup windows using User Interface Process Application Block
- DataGrid problem
- Fonts installed on the web client?
- running aspnet_wp.exe under a different user
- impersonate and visio
- Database not Found
- Panel position
- dynamic creation of user controls
- setting the row of web datagrid
- Could not load type after copy/paste .ascx files into a directory set as application in IIS
- New request, new thread not always ?
- Frames
- How to display correct URL in browser after Server.transfer
- determine the mime/ContentType of a binary stream?
- LostFocus Event
- datagrid highlight
- running ASP.NET under a domain user
- Response.AddHeader corrupts Japanese/chinese characters when writing outputstream
- How to create tabs?
- which best software for this?
- sending asp.net smtp sendMail of UTF-8 unicode contents
- Application level trace
- Access to the path is denied
- How to check the class variable is "new"(to object) or not?
- TreeView
- Passing long text from Client to Server
- highlighting row
- webform data never changes
- Web FARM
- Web Server running ASP.Net 1.0 error
- How to set focus to a control
- asp.net total number of sessions open for an application
- CSP Could Not Be Found
- asp.net automatically perform action based on a date
- Stupid error - Radiobox list...
- table into another dataset
- how do I popup a window w/o using client-side script
- Suggestions on Learning ASP.net
- Debugger not stopping at break points in VS2002?
- Searching by DATE problem
- Framesets - server side control?
- Web COM+ interop results in out of memory errors
- MSDE Authentication
- prompt to save changes in datagrid in editmode if user tries to navigate away
- iis 5 virtual directory on samba share
- Request.QueryString and Frames
- GAC Question
- How to make FileSystemWatcher Wait for Completion
- add <body onunload="blah();">
- GAC / Windows Installer
- code behind again
- Weirdness with HTTPModule, HTTPHandler, IIS, Mappings
- code behind
- Logical "or" with RegularExpressions
- How can I achieve Rowspan capabilities using the ASP:Repeater control
- project files
- Using a Stream to Download Excel Files
- File Open Dialog box
- Using FileStream object to read JPEG File from Https location.
- Anyone see the problem with this line????
- Request.Querystring does not return extended characters
- Intermittent loss of session variables and cookie contents
- IIS and Norton
- Timeout on Xsl Transformation
- Problem maintaining dataset in session variable
- Server.transfer and public variables
- Div element and value
- RadioButtonList Databinding problem.
- Input mask on a text box?
- memoryLimit limations
- Access Denied
- Quest: SQL <-- --> ASP.NET DateTime capatibities...
- ViewState Questions
- Could not load type: 'HelloWorld2.Global'
- javascript in datagrid
- Textbox has "focus"
- download an XML file Using fileStream
- ASP.NET process identity does not have read permissions to the global assembly cache
- 12 tables on aspx page: A performance hit?
- Session_OnEnd is not being called
- asp.net vs asp
- Copy HTML of a table
- Fonts installed on the web client???
- Error handling in application_error, use session variables?
- How to open an RTF file using ASP .NET
- yes no in a datagrid
- ASP.NET process identity does not have read permissions to the GAC
- Request is done twice randomly
- Change Web.config value programatically
- highlighting a row
- Refreshing dropdowns?
- update
- storing`passwords in cookies
- datarow-datagriditem
- Session problem
- CSS doesn't work on dynamicly added user controls
- internationalization problem continues.
- Problem of throwing an exception (System.Net.Sockets.OverlappedAsyncResult::Complet ionPortCallback)
- How to get the display value of button column in DataGrid?
- a class inherited from ArrayList, is saved to ViewState, why the type of the object read from ViewState is not the class, but the parent, ArrayList
- how to password protect site?
- page width & height
- forms authentication
- license an asp.net server control (DLL)
- Inheriting class with private constructor
- Using javascript return confirm function within asp.net sub
- Bulk email program has problem.
- Sql Server multiple XML result sets to ADO.NET
- Parser Error Message: Could not load type - please HELP
- Retrieving The Application Name Programatically
- form problem
- deploying database and asp.net app on separate/same servers
- Tool to Monitor response/request's
- Registry Tweaks In XP or 2k3Server
- Accessing JavaScript function in codebehind
- Cannot add a Web Reference in Vb.net Version 7
- Crystal Report Picture
- asp:button(IIS 6)
- More problems with IDE and documentation
- license a server control
- Prevent "back" functionality
- Datalist in User Control and Dynamic Load it.
- Reference to files in SQL database Web App
- Passing windows credentials from server to server.
- Sessions lost with new assembly
- Connection String not working
- Invalid object name problem
- Serialization of DataView
- default button and enter
- I keep getting this error. pl help
- About TextMode = Password
- Problem when inserting Data using Arabic or Farsi Language
- Database binding
- Auto mailer
- Using controls across multiple virtual directories
- User control fires a method in webform
- How to retrieve Application variables down the class line?
- Status Page
- rs.AddNew in ado.net
- setting Best Practice ADO.NET connections
- deploying ASP.NET?
- top level marker?
- Javascript closes IE
- Associate Command Button to Javascript Code
- need some help
- How to slowly add content to a web page?
- enableSessionState - still struggling
- web controls will not display
- facing a problem
- web control wont display
- Arrays doesn't save between calls
- Fire PageEvent Manually through HttpHandler
- Edit in DataGrid
- Error Handling with response.write
- question regarding overriding of web.config in the root directory..in a web app in a virtual directory
- Dataset Updation
- Difference between C# and VB.NET
- Difference between C# and VB.NET
- Which is advisable for object creation?
- Object handling in component development using C#
- First asp.net error. Pl help
- Could not access CDO.message object
- How to display nested structures?
- Script Timer
- HttpWebRequest error help please.
- ASP deal with message return from SQL Server
- user agent return on a local machine
- What is the most efficient way to access common fcts on asp.net pages when using user controls?
- Login Fails for SQL Server from Web Service
- ASP.NET with VB.NET sql statement for MS ACCESS
- Initializing webform through PageInit() in HttpModule, Httphandler
- Front Controller / IHttpModule
- calling a function on the parent form from a popup window
- Very wierd ASP.NET and SSL problem.. IE goes gray
- Listview?
- ASP.NET and right-to-left and bidirectional languages
- what is Matrix?
- 2K Okay, no go on the 2K3 Web Server
- Best Practice for Adding a new row to a DataGrid?
- Repeater and Custom HTML Question
- ASP Hyperlink with absolute NavigateURL? is it possible
- Usage of "Request.UrlReferrer"
- Opening Project Gives An Error
- how to: Deploy a web service
- ViewState general question
- Filtered output disappears after browser back/forward
- help please
- How do I do this.
- Detect if Fonts are installed on web client
- Abstract code from HTML
- Cast from type 'DBNull' to type 'String' is not valid.
- selectedindexchanged and user controls
- ASP.NET and Customer HTML
- Custom Paging
- What framework version am I running?
- ASP and Access
- No data in data grid
- Postback issues
- Security Attributes without Try-Finally?
- Beware of scorpion53061(theft)
- how to password protect site
- Web Forms Toolbox is disabled in HTML view?
- Generate PDF from ASP.NET
- Double 'open' downloading a file in an ASP.NET page
- aspnet_wp.exe Repeatedly stopping and being recycled unexpectedly
- Timers
- Enter Key submission is not raising the command event
- data set- scroll
- New Session Creation
- Execute a string of code as if it were a page
- run a vb.net module in asp.net across WAN
- Help, please
- ASP.NET and site structure
- c#/.NET coding standard?
- Databound Textbox - DataSet not getting updated value
- who know what leftpane in asp.net is ?
- .NET 1.0/1.1, VS 2002/VS 2003
- See no DataList
- Do a postback without updating client's page?
- .Net features for web deployment of applications?
- Popup windows.......
- Change forecolor for individual listitems in CheckBoxList
- WebClient URL after Response.redirect
- update to c# library would not propagate
- Comparison Operators
- Problem with a DataList
- PlaceHolders
- Access Denied!
- XMLReader or XMLDocument ?
- truncate time from date in DataTable.Select()
- Not able to see asp file
- How send parameters to user control
- __doPostBack problem
- Setting MailMessage email address
- asp.net uploads - confused
- read xml
- 2 Simple Questions About Session_Start
- Inheritance Problem (src vs. Codebehind)
- Converting DataSet to object type
- Generating Excel with HTML
- Customize a DataGrid Row
- Help example update sql db using XML
- Session Related Problem ???
- Evaluates a supplied string
- custom user logging tracking
- can't put a url into the value of a key in appSettings (web.config)
- Problem with a DropDownList
- Using cookies to determine the last page visited
- client/server form post
- Page Validation and Submit problems in ASP.NET
- Frames / dll issue
- Page Validation and Submit problems ??
- SessionState
- requested recommendation on data presentation
- Masking DataTextField Output For Display In a ListBox
- Centering controls on a Web Form
- Fade Image Region with GDI+ ?
- HO HO HO UserControls what's the Dealie-O
- smartscroller v1.1
- Running an App in the background
- Visual Studio HTML Designer
- convert date
- Convert date
- virtual directory not being configured as an application error in asp.net app
- Print Web Reports (impersonate)
- Conditional Row in Grid
- How to select text in textboxs by clicking on it?
- System.Web.UI.Page inheritance and viewstate
- proble with web.config
- Cannot run ASP.NET in the browser
- Code in the HTML, bad idea?
- Datagrid Header
- Appplication_Start
- Passing variables problem
- events for dropdownlist found in a datagrid?
- RegisterCards in IE?
- Pass an additionnal HTTP parameter while using Server.Transfer
- Displaying the pdf file
- pdf in the web browser
- request form
- Pass aspx pages to designer
- ASP.NET and site structure
- ConnectionString Initialization
- DataGrid sort
- web.config question
- debugging help
- VB.NET ASP.NET prob
- VB.NET ASP.NET prob
- Accessing form data
- Safari Display Problem
- Old Age Problem: Refresh Button Causes Postback
- BC30205: End of statement expected.
- How to show user default 401 Error site ?
- Send data to another server
- ASP to ASP.Net (Datagrid Question)
- Overload
- Page Load called twice!
- Is there a spell check in ASP.NET?
- Login failed for user '(null)'. Reason: Not associated with a trusted SQL Server connection.
- Dynamically create controls ?
- Problems with Session's variables
- asp.net & Image database type
- Data in a Module
- How to implement MS Agent
- httphandelrs problem
- More ParseControl()
- Anchor controls
- retrieveing entire user agent
- Looking for an example update db through a web service?
- XML TAG UPDATE
- multiple page form
- Dynamic update between web forms
- How to do this?
- Drop down list indexing problem
- string -> base64
- Creating context
- Div tag
- Forcing "Not IsPostBack" code to run on postback if nessessary???
- Cross-Site Scripting...
- ASP.NET prob...(using VB.NET)
- How to do it ?
- Is this a ParseControl bug?
- .NET Bug with BinaryWrite or just bad code?
- Database Connectivity using DSN
- Database Connectivity using DSN
- VS.NET versioning issues
- Change the position of the item in listBox
- Instantiating word in asp.net
- Internal Server Error 500 vb.net
- Protecting Software from Piracy
- Select Item in DropDown in DataGrid?
- DataGrid Add single row/column?
- Attach HTML to Table
- Log4Net help wanted
- Updating dataset in crystal reports
- asp.net to sql server 2000 access problems
- PopUpProblem
- eventbubble
- Err running the application in ASP.net - URGENT
- ASP.NET not running
- Radio Button and groupName bug in ASP.NET
- Textbox is tiny on Mozilla Firebird
- ASP.NET not running
- add dataset values back into database
- Fix the width of a dropdown list?
- problem in session state cookies
- starting ASP.NET
- Session state can only be used when EnableSessionState is set to true... error
- Sample site
- Passing Parameters to Windows Form Control FROM IE
- Session and View State variables
- Moving from Baisc Authentication to Forms Authentication
- Petshop Error
- Creating a downloadable file on the fly
- Frames
- Passing Parameters to Windows Form Control FROM IE
- Learnvisualstudio.net web site
- Change Rows to Clolumns in a Dataset?
- list box in a repeater
- Chinese characters in ASP:DropDownList
- More debuging an line number.
- getting confused and going crazy
- Dynamic Content
- error when running the ASP.NET application
- ParseControl()
- Access to controls in <FooterTemplate>
- How to display the information in popup windows under ASP.NET envoronment?
- Add Event Handler Dynamically, but assigning function name at run-time, not to static function
- XML Menu Problem
- Access database authentication
- TreeView and postback
- ContentType problem - downloading instead of displaying
- Positioning controls on a webform at runtime
- Desperate repost: Reloading page too quickly results in an error
- Is AppSettings cached
- how to sezt textbox-focus?
- using DataBinder in <img>
- Inject Ad Banner in DataList
- ASP.NET to replace ActiveX controls
- Foreign language support in ASP.Net
- Custom Server Controls... good book on this?
- Test cookie
- show data from MS access in data grid using ASP.Net
- Newsgroup Web Interface
- File Access
- CacheDependency
- fill dataset from reding a delimited text file
- getting those built in error pages
- Cache expiry?
- Configuration Error
- EnableViewstate problem
- problem in web application
- Image uploading
- Why does webform cannot get data?
- Reading Blobs From Oracle
- Responding to Command button clicks in Page_Load
- How much memory SHOULD user pc need running asp.net app?
- Application_Error not emailing at global.asax
- Identity impersonate="true" hoses app on just dev pc?
- Duplicate ASP web project
- OKAY NOW I AM PIST! ListBox x 2. and Response.Redirect
- Front Controller design pattern and HTTP handler
- "unable to get the project file from the web server"
- Passing session data across applications.
- erro E_NOINTERFACE(0x80004002).
- TagPrefix?
- question about the IE BACK Button
- Enable/Disable RequiredFieldValidator
- HttpWebRequest.GetResponse() throwing Timeout
- script timeout
- repeater control
- Image in ASP.Net w/ SSL
- Portal Starter Kit authentication
- Back Button and Dyanmic Control
- debugging on xp
- how to package ActiveX .dll user w/ dependencies in web page?
- Access is denied
- Beginner ?
- Moving ASP.NET Visual Studio 2003 project to a different server
- Portal Starter Kit - Admin Tab Not Showing
- ASP.NET development issue.
- ASP.NET File Upload
- ASP.NET File Upload
- Problem with ASP.NET Delegation and Impersonation
- What is FIRST event on Page to Calculate Session Variables?
- Holding Page for when Application Domain dies?
- Download Page as Attachment
- right click
- using an external function in asp.net page
- how to override RadioButtonList render?
- Data Exchange between Applications using .NET
- Crystal Reports. Changing dataset schema
- Crystal Reports: Error: Fail to render the page
- Speech Application
- Can't call Stored Proc... How to trace Flow of Web Application in VB.NET?
- machine.config changes - restart needed?
- Help displaying printer friendly DataGrid in new window.
- Open program from ASP
- Parser Error
- defaultRedirect
- How can I do text1.text=dropdownlist1.DataTextField on "dropdownlist1.MouseOver"?
- OutputStream - save file then - redirect
- Joseph Greer
- Anchor Bay Camp
- Anchor Bay Camp / Joe Greer
- OutputStream the redirect
- HTTPostedFile & Word Automation
- Error while trying to run project (ASP.NET Application)
- Visio and .NET
- Redirecting Command in Front Controller
- Different cultures in same WEB App.
- COM object accessing asp Application object
- why does refresh on browser break this code?
- ViewState problem
- Disable Linkbutton Before Postback?
- Problem creating web user control
- Event for dropdown control created at runtime.URGENT!
- RadioButtonList created in the page load event
- Redirect and POST
- Sharing Session data across two projects
- Accessing HTML Control in ASCX through JavaScript
- Socket.Connect() timeout
- kkkkkkkkkkkk
- Tag order
- FindBy<> method in Typed DataSet.
- form size with different client resolutions
- Synchronizing Code Behind
- running application in asp.net process.
- Dropdown
- What is IUSR equivalent for IIS 6.0?
- security issue
- Testing form values
- Assinging dynamic content from file to textbox
- using linked .js file in UserControl
- form that runs at server and client?
- Could not load type "namespace.class" Parser Error
- Failed to get page - crystalreportviewer
- Code the WorksheetOptions on export to Excel
- Index was out of range?
- how to package ActiveX .dll user control in web page
- Alternatives to IIS
- Issues with handing server events...
- Clearcase with IIS 5.1 and ASP.NET?
- detect version of asp.net mappings
- Remove datagrid col at runtime?
- create Web user control programmically
- Is IsNull supported in ASP.NET?
- open explorer,run aspx with passed string
- Share user control across applications
- Timing error on Page Load with User Control
- Produce a Dataset from a Grid
- Include form info in Response.Redirect
- Web Services Enhancements 2.0
- Inheriting System.UI.ControlCollection for a WebControl - FindControl doesn't work???
- Cursor on the Username text box
- Trace.axd output log question.
- Trouble getting row count from DataReader
- globals in unmanaged space
- document completed parsing
- Scroll Windows
- JScript
- Classes without compiling a dll
- Crystyal Reports
- About ArrayList
- .NET Timers aborting in web application
- abstract and base class
- Client IP
- Open new with new session | https://bytes.com/sitemap/f-329-p-163.html | CC-MAIN-2019-43 | refinedweb | 2,950 | 50.84 |
Go language features
While TinyGo supports a big subset of the Go language, not everything is supported yet.
Here is a list of features that are supported:
- The subset of Go that directly translates to C is well supported. This includes all basic types and all regular control flow (including
switch).
- Slices are well supported.
- Interfaces are quite stable and should work well in almost all cases. Type switches and type asserts are also supported, as well as calling methods on interfaces. The only exception is comparing two interface values (but comparing against
nilworks).
- Closures and bound methods are supported, for example inline anonymous (lambda-like) functions.
- The
deferkeyword is almost entirely supported, with the exception of deferring some builtin functions.
Concurrency
At the time of writing (2019-11-27), support for goroutines and channels works for the most part. Support for concurrency on ARM microcontrollers is complete but may have some edge cases that don’t work. Support for other platforms (such as WebAssembly) is a bit more limited: calling a blocking function may for example allocate heap memory.
Cgo
While TinyGo embeds the Clang compiler to parse
import "C" blocks, many features of Cgo are still unsupported. For example,
#cgo statements are only partially supported.
Reflection
Many packages, especially in the standard library, rely on reflection to work. The
reflect package has been re-implemented in TinyGo and most common types like numbers, strings, and structs are supported now.
Maps.
Standard library
Due to the above missing pieces and because parts of the standard library depend on the particular compiler/runtime in use, many packages do not yet compile. See the list of compiling packages here (but note that “compiling” does not imply that works entirely).
Garbage collection
While not directly a language feature (the Go spec doesn’t mention it), garbage collection is important for most Go programs to make sure their memory usage stays in reasonable bounds.
Garbage collection is currently supported on all platforms, although it works best on 32-bit chips. A simple conservative mark-sweep collector is used that will trigger a collection cycle when the heap runs out (that is fixed at compile time) or when requested manually using
runtime.GC(). Some other collector designs are used for other targets, TinyGo will automatically pick a good GC for a given target.
Careful design may avoid memory allocations in main loops. You may want to compile with
-print-allocs=. to find out where allocations happen and why they happen. For more information, see heap allocation.
A note on the
recover builtin
The
recover builtin is not yet supported. Instead, a
panic will always terminate a program and
recover simply returns nil.
This is a deviation from the Go spec but so far works well in practice. While there are no immediate plans to implement
recover, if it can be shown to be necessary for compatibility it will be implemented. Please note that this comes at a cost: it means that every
defer call will need some extra memory (both code and stack), so this feature is not free. It might also be architecture dependent. If it gets implemented, it will likely be opt-in to not increase code size for existing projects. | https://tinygo.org/docs/reference/lang-support/ | CC-MAIN-2022-40 | refinedweb | 539 | 55.24 |
.
Because the “…” part of the parameters have no name, a special set of macros contained in stdarg.h gives the function access to these arguments. Earlier versions of such functions had to use similar macros contained in varargs.h.
Assume that the function you want to write is an error handler called errmsg() that returns void, and whose only fixed parameter is an int that specifies details about the error message. This parameter can be followed by a file name, a line number, or both. These items are followed by format and arguments, similar to those of printf(), that specify the text of the error message.
For this example to compile with earlier compilers requires extensive use of the macro __STDC__, which is defined only for ISO C compilers. The function’s declaration in the appropriate header file is:
#ifdef __STDC__ void errmsg(int code, ...); #else void errmsg(); #endif
The file that contains the definition of errmsg() is where the old and new styles can get complex. First, the header to include depends on the compilation system:
#ifdef __STDC__ #include <stdarg.h> #else #include <varargs.h> #endif #include <stdio.h>
stdio.h is included because we call fprintf() and vfprintf() later.
Next comes the definition for the function. The identifiers va_alist and va_dcl are part of the old-style varargs.h interface.
void #ifdef __STDC__ errmsg(int code, ...) #else errmsg(va_alist) va_dcl /* Note: no semicolon! */ #endif { /* more detail below */ }
Because the old-style variable argument mechanism did not allow you to specify any fixed parameters, they must be accessed before the varying portion. Also, due to the lack of a name for the “…” part of the parameters, the new va_start() macro has a second argument, which is the name of the parameter that comes just before the “…” terminator.
As an extension, Oracle Developer Studio ISO C allows functions to be declared and defined with no fixed parameters, as in:
int f(...);
For such functions, va_start() should be invoked with an empty second argument, for example:
va_start(ap,)
The following example is the body of the function:
{ va_list ap; char *fmt; #ifdef __STDC__ va_start(ap, code); #else int code; va_start(ap); /* extract the fixed argument */ code = va_arg(ap, int); #endif if (code & FILENAME) (void)fprintf(stderr, "\"%s\": ", va_arg(ap, char *)); if (code & LINENUMBER) (void)fprintf(stderr, "%d: ", va_arg(ap, int)); if (code & WARNING) (void)fputs("warning: ", stderr); fmt = va_arg(ap, char *); (void)vfprintf(stderr, fmt, ap); va_end(ap); }
Both the va_arg() and va_end() macros work the same for the old-style and ISO C versions. Because va_arg() changes the value of ap, the call to vfprintf() cannot be:
(void)vfprintf(stderr, va_arg(ap, char *), ap);
The definitions for the macros FILENAME, LINENUMBER, and WARNING are presumably contained in the same header as the declaration of errmsg().
A sample call to errmsg() could be:
errmsg(FILENAME, "<command line>", "cannot open: %s\n", argv[optind]); | https://docs.oracle.com/cd/E77782_01/html/E77788/bjaju.html | CC-MAIN-2021-21 | refinedweb | 482 | 51.78 |
Hi there,
My question is a short one. I've been looking for a reference on a Canvas selectable button. The button itself does not call a function when clicked, it should just wait to see if it has been highlighted.
I have a script that was put directly on the button itself but I do not know how to write an "if statement" on "if this button is selected then.." in C#.
and if i want to know when its not highlighted?
Answer by DoTA_KAMIKADzE
·
Apr 20, 2015 at 12:07 AM
Add script containing this code to your Button:
using UnityEngine;
using UnityEngine.Events;
using UnityEngine.EventSystems;
public class YourScript : MonoBehaviour, IPointerEnterHandler, ISelectHandler
{
public void OnPointerEnter(PointerEventData eventData)
{
//do your stuff when highlighted
}
public void OnSelect(BaseEventData eventData)
{
//do your stuff when selected
}
}
Thank you very much it seemed to work using the mouse.. Though I do have one issue.
When I select the button using the navigation on the keyboard, it does not read as being highlighted because a pointer is not being used.
How would I go about reading if it is highlighted even without pointer?
Updated my answer, check it out.
Use OnPointerExit or OnDeselect for that. The EventTrigger article lists the various events you can intercept.
Answer by sisse008
·
Apr 29, 2018 at 01:04 PM
cant find any new info on this topic.
OnPointerEnter() only works with the mouse.
How can I check if a button is highlighted while using keyboard navigation?.
Rect to RectTransform on overlay Canvas?
1
Answer
Event trigger for instantiated objects
0
Answers
Canvas GUI Button problem walking
1
Answer
creating a label from a button click
1
Answer
How to make button stay the same size on all screens?
2
Answers | https://answers.unity.com/questions/950500/if-button-highlighted.html?sort=oldest | CC-MAIN-2020-29 | refinedweb | 293 | 64.61 |
How to Read a File from Line 2 or Skip the Header Row?
In this article, we will learn how one can read a file from the second line in Python. We will use some built-in functions, some simple approaches, and some custom codes as well to better understand the topic.
Python handles various file operations. In the case of reading files, the user can start reading a file either from the first-line or from the second line. This article will show how you can skip the header row or the first line and start reading a file from line 2. Let us discuss four different methods to read a file from line 2. We will read a sample.txt file as well as a sample.csv file.
Sample Text File //sample.txt
Student Details of Class X David, 18, Science Amy, 19, Commerce Frank, 19, Commerce Mark, 18, Arts John, 18, Science
Sample CSV File //sample.csv
Student Details of Class X David 18 Science Amy 19 Commerce Frank 19 Commerce Mark 18 Arts John 18 Science
Now, let us look at four different ways to read a text file and a csv file from line 2 in Python. We will use the above files to read the contents.
Example: Read the Text File from Line 2 using next()
We use the sample.txt file to read the contents. This method uses next() to skip the header and starts reading the file from line 2.
Note: If you want to print the header later, instead of next(f) use
f.readline() and store it as a variable or use
header_line = next(f). This shows that the header of the file is stored in next().
#opens the file with open("sample.txt") as f: #start reading from line 2 next(f) for line in f: print(line) #closes the file f.close()
David, 18, Science
Amy, 19, Commerce
Frank, 19, Commerce
Mark, 18, Arts
John, 18, Science
Example: Read the Text File from Line 2 using readlines()
We use the sample.txt file to read the contents. This method uses
readlines() to skip the header and starts reading the file from line 2.
readlines() uses the slicing technique. As you can see in the below example,
readlines[1:] , it denotes that the reading of the file starts from index 1 as it skips the index 0. This is a much more powerful solution as it generalizes to any line. The drawback of this method is that it works fine for small files but can create problems for large files. Also, it uses unnecessary space because slice builds a copy of the contents.
#opens the file f = open("sample.txt",'r') #skips the header lines = f.readlines()[1:] print(lines) #closes the file f.close()
['David, 18, Science\n', 'Amy, 19, Commerce\n', 'Frank, 19, Commerce\n', 'Mark, 18, Arts\n', 'John, 18, Science']
Example: Read the Text File from Line 2 using islice()
We use the sample.txt file to read the contents. This method imports
islice from
itertools module in Python.
islice() takes three arguments. The first argument is the file to read the data, the second is the position from where the reading of the file will start and the third argument is None which represents the step. This is an efficient and pythonic way of solving the problem and can be extended to an arbitrary number of header lines. This even works for in-memory uploaded files while iterating over file objects.
from itertools import islice #opens the file with open("sample.txt") as f: for line in islice(f, 1, None): print(line) #closes the file f.close()
David, 18, Science
Amy, 19, Commerce
Frank, 19, Commerce
Mark, 18, Arts
John, 18, Science
Example: Read the CSV File from Line 2
We use the sample.csv file to read the contents. This method reads the file from line 2 using
csv.reader that skips the header using
next() and prints the rows from line 2. This method can also be useful while reading the content of multiple CSV files.
import csv #opens the file with open("sample.csv", 'r') as r: next(r) #skip headers rr = csv.reader(r) for row in rr: print(row)
['David', '18', 'Science']
['Amy', '19', 'Commerce']
['Frank', '19', 'Commerce']
['Mark', '18', 'Arts']
['John', '18', 'Science']
Conclusion
In this article, we learned to read file contents from line 2 by using several built-in functions such as
readlines(),
islice(),
csv.reader() and different examples to skip the header line from the given files. | https://www.studytonight.com/python-howtos/how-to-read-a-file-from-line-2-or-skip-the-header-row | CC-MAIN-2022-21 | refinedweb | 763 | 82.24 |
I suggest simplying your entire code to about the same length as you have posted. You don't show us any non-virtual classes, and you don't show how AlphaBeta is constructed.
It looks like you are reading in all of the values, then only writing out the matrix once. What you have looks good, but I would suggest making it into a function, and using variable names that are descriptive:
void PrintMatrix(int matrix[10][10]; const int numberOfRows, const int numberOfColumns)
{
for(int row = 0; row < numberOfRows; row++)
{
cout<<endl;
for(int column = 0; column < numberOfColumns; column++)
{
cout<<a[i][j]<<" ";
}
}
}
QLabel::QLabel(const QLabel&) is called the copy constructor, and it is indeed private, so you cannot use it. You are trying to use it by copying your object when you pass it to the function. You should accept a const reference to the object instead:
void Test(const QLabel& yourLabel)
David
You are explicitly asking the user for the dimensions, so what is the problem? You have to assume either row-major or column-major (i.e. for:
1 2
3 4
row major ordering would be (1,2,3,4) where column major would be (1,3,2,4) ), but once you specify how you expect the input there is no reason to need line break characters, etc.
Also, please please please change variable names like 'd', and 'e' to "numberOfRows", etc.
Please include the entire compiler output. Which identifier is not found? My guess would be read_file, since you didn't declare it before main.
For starters, I would suggest never using spaces in a file name :)
If you insist on doing so, I think Windows likes backslashes instead of forward slashes - I don't if c++ cares about this.
Also, you may need to "escape" the space with a backslash:
read_file("F:/2nd\ year_2/Data structure 2/books/lecture-26.pdf");
You'll have to provide more information. Do you really need to store all 1000 of these 3000x3000 matrices? If so, then you're just out of luck :). Give us an explanation of the algorithm.
Yea if its getting to 95% right before it crashes that is definitely an "out of memory".
I think you should verify that it is indeed an "out of memory" before we look into it too far. I can't give you exact instructions because I'm on linux at the moment, but I believe if you open the task manager there should be some plots/statistics about cpu/memory usage. I'd leave that open while you run the program and if it goes up and up and up and gets to the top and then crashes then you've found the problem :)
Are you deleting the memory you are allocating with malloc? Depending on how big these matrices are, maybe you are running out of memory? Do you need to keep each of these 4000 matrices? Or just use it and then discard it?
Before messing around with a printBoard() function, you should make some examples for yourself to make sure you understand the basics. Create a string and output it. Does it look correct?
Before messing around with tictactoe, you should make some examples for yourself to make sure you understand the basics. Create a string and output it. Does it look correct?
Stringstream works just like "cin" (an input stream). Here is a demo:
I think your question would get better responses here:
It looks like you missed copying/pasting the critical line to answer this question! (The function declaration is missing!)
You should use more descriptive names. What is 'sz'? What does 'calc' do?
You should also post the shortest compilable example you can make that demonstrates the problem.
I suggest creating a function called OutputBinary(int lengthOfNumbers) and testing it by hardcoding a call to OutputBinary(3). This way you will be able to tell if the problem is with your output of if it is with your user input.
Sreekiran,
As a small side note, please use real English words like "please" instead of "plz" - it helps keep Daniweb looking professional!
I would start by switching from char arrays to std::string and from strcmp to .compare -
I think you're missing two semicolons:
#include <iostream>
class Hop
{
protected:
static struct NStrct
{
int nCount;
} test;
};
int main()
{
return 0;
}
If you want to use Qt it look like QConsole should do it: | http://www.daniweb.com/members/274878/daviddoria/posts/solved | CC-MAIN-2013-20 | refinedweb | 743 | 71.95 |
Android 10 adds support for stable Android Interface Definition Language (AIDL), a new way to keep track of the application program interface (API)/application binary interface (ABI) provided by AIDL interfaces. Stable AIDL has the following key differences from AIDL:
- Interfaces are defined in the build system with `aidl_interface` modules.
- Interfaces can contain only structured data. Parcelables representing the desired types are automatically created based on their AIDL definition and are automatically marshalled and unmarshalled.
- Interfaces can be declared as stable (backwards-compatible). When this happens, their API is tracked and versioned in a file next to the AIDL interface.
Defining an AIDL interface
A definition of `aidl_interface` looks like this:
```
aidl_interface {
    name: "my-aidl",
    srcs: ["srcs/aidl/**/*.aidl"],
    local_include_dir: "srcs/aidl",
    imports: ["other-aidl"],
    versions: ["1", "2"],
    stability: "vintf",
    backend: {
        java: {
            enabled: true,
            platform_apis: true,
        },
        cpp: {
            enabled: true,
        },
        ndk: {
            enabled: true,
        },
    },
}
```
- `name`: The name of the AIDL interface module that uniquely identifies an AIDL interface.
- `srcs`: The list of AIDL source files that compose the interface. The path for an AIDL type `Foo` defined in a package `com.acme` should be at `<base_path>/com/acme/Foo.aidl`, where `<base_path>` can be any directory relative to the directory where `Android.bp` is. In the example above, `<base_path>` is `srcs/aidl`.
- `local_include_dir`: The path from where the package name starts. It corresponds to `<base_path>` explained above.
- `imports`: A list of `aidl_interface` modules that this uses. If one of your AIDL interfaces uses an interface or a parcelable from another `aidl_interface`, put its name here. This can be the name by itself, to refer to the latest version, or the name with a version suffix (such as `-V1`) to refer to a specific version. Specifying a version has been supported since Android 12.
- `versions`: The previous versions of the interface that are frozen under `api_dir`. Starting in Android 11, the versions are frozen under `aidl_api/name`. If there are no frozen versions of an interface, this shouldn't be specified, and there won't be compatibility checks.
- `stability`: The optional flag for the stability promise of this interface. Currently it only supports `"vintf"`. If this is unset, it corresponds to an interface with stability within this compilation context (so an interface loaded here can only be used with things compiled together, for example on system.img). If this is set to `"vintf"`, it corresponds to a stability promise: the interface must be kept stable as long as it is used.
- `gen_trace`: The optional flag to turn tracing on or off. The default is `false`.
- `host_supported`: The optional flag that, when set to `true`, makes the generated libraries available to the host environment.
- `unstable`: The optional flag used to mark that this interface doesn't need to be stable. When this is set to `true`, the build system neither creates the API dump for the interface nor requires it to be updated.
- `backend.<type>.enabled`: These flags toggle each of the backends that the AIDL compiler generates code for. Currently, three backends are supported: `java`, `cpp`, and `ndk`. All backends are enabled by default. When a specific backend isn't needed, it must be disabled explicitly.
- `backend.<type>.apex_available`: The list of APEX names that the generated stub library is available for.
- `backend.[cpp|java].gen_log`: The optional flag that controls whether to generate additional code for gathering information about the transaction.
- `backend.[cpp|java].vndk.enabled`: The optional flag to make this interface a part of the VNDK. The default is `false`.
- `backend.java.platform_apis`: The optional flag that controls whether the Java stub library is built against the private platform APIs rather than the SDK. This should be set to `true` when `stability` is set to `"vintf"`.
- `backend.java.sdk_version`: The optional flag for specifying the version of the SDK that the Java stub library is built against. The default is `"system_current"`. This shouldn't be set when `backend.java.platform_apis` is `true`.
For each combination of the `versions` and the enabled backends, a stub library is created. See Module naming rules for how to refer to the specific version of the stub library for a specific backend.
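As a sketch of how that naming works in practice, a client can pin itself to a frozen version by depending on the versioned stub name. The module names here are hypothetical, and the `<name>-V<version>-<backend>` pattern is the one the module naming rules describe:

```
cc_binary {
    name: "my-client",
    srcs: ["client.cpp"],
    // Build against frozen version 1 of the C++ backend stub.
    shared_libs: ["my-aidl-V1-cpp"],
}

java_library {
    name: "my-java-client",
    srcs: ["Client.java"],
    // Build against frozen version 2 of the Java backend stub.
    static_libs: ["my-aidl-V2-java"],
}
```

Depending on the unsuffixed name, as in the later examples, keeps the client on the latest version instead.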
Writing AIDL files
Interfaces in stable AIDL are similar to traditional interfaces, with the exception that they aren't allowed to use unstructured parcelables (because these aren't stable!). The primary difference in stable AIDL is how parcelables are defined. Previously, parcelables were forward declared; in stable AIDL, parcelables fields and variables are defined explicitly.
```
// in a file like 'some/package/Thing.aidl'
package some.package;

parcelable SubThing {
    String a = "foo";
    int b;
}
```
A default is currently supported (but not required) for `boolean`, `char`, `float`, `double`, `byte`, `int`, `long`, and `String`. In Android 12, defaults for user-defined enumerations are also supported. When a default is not specified, a 0-like or empty value is used. Enumerations without a default value are initialized to 0 even if there is no zero enumerator.
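Putting those defaulting rules together, a hedged sketch of an AIDL file (all names hypothetical) might look like this:

```
package some.package;

// Without explicit values, the first enumerator is 0, so an
// unspecified Mode behaves like FAST.
enum Mode {
    FAST,
    SLOW,
}

parcelable Config {
    boolean enabled = true;   // explicit default
    int retries = 3;          // explicit default
    String tag;               // no default: empty string
    Mode mode = Mode.FAST;    // enum default (Android 12 or higher)
}
```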
Using stub libraries
After adding stub libraries as a dependency to your module, you
can include them into your files. Here are examples of stub libraries in the build
system (`Android.mk` can also be used for legacy module definitions):
```
cc_... {
    name: ...,
    shared_libs: ["my-module-name-cpp"],
    ...
}

# or

java_... {
    name: ...,
    // can also be shared_libs if desire is to load a library and share
    // it among multiple users or if you only need access to constants
    static_libs: ["my-module-name-java"],
    ...
}
```
Example in C++:
#include "some/package/IFoo.h" #include "some/package/Thing.h" ... // use just like traditional AIDL
Example in Java:
import some.package.IFoo;
import some.package.Thing;
...
// use just like traditional AIDL
Versioning interfaces
Declaring a module with name foo also creates a target in the build system that you can use to manage the API of the module. When built, foo-freeze-api adds a new API definition under api_dir or aidl_api/name, depending on the Android version, and adds a .hash file, both representing the newly frozen version of the interface. Building this also updates the versions property to reflect the additional version. Once the versions property is specified, the build system runs compatibility checks between frozen versions and also between Top of Tree (ToT) and the latest frozen version.

In addition, you need to manage ToT version's API definition. Whenever an API is updated, run foo-update-api to update aidl_api/name/current which contains ToT version's API definition.
To maintain the stability of an interface, owners can add new:
- Methods to the end of an interface (or methods with explicitly defined new serials)
- Elements to the end of a parcelable (requires a default to be added for each element)
- Constant values
- In Android 11, enumerators
- In Android 12, fields to the end of a union
No other actions are permitted, and no one else can modify an interface (otherwise they risk collision with changes that an owner makes).
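As a concrete sketch (the interface and method names here are invented), a compatible evolution between two frozen versions only ever appends:

```
// Version 1 of some/package/IFoo.aidl
interface IFoo {
    void doA();
}

// Version 2: existing declarations are untouched; additions go at the end
interface IFoo {
    void doA();
    void doB();    // new method appended
}
```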
Using versioned interfaces
Interface methods
At runtime, when trying to call new methods on an old server, new clients get either an error or an exception, depending on the backend.
- cpp backend gets ::android::UNKNOWN_TRANSACTION.
- ndk backend gets STATUS_UNKNOWN_TRANSACTION.
- java backend gets android.os.RemoteException with a message saying the API is not implemented.
For strategies to handle this see querying versions and using defaults.
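As a toy model of the client-side pattern (plain C++, no binder involved; the status value and function names are invented stand-ins for the backend-specific codes listed above):

```cpp
#include <string>

// Stand-in for the backend-specific "method not implemented" status.
constexpr int kUnknownTransaction = -1;

// Pretend remote call: an old server doesn't implement the new method.
int callNewMethod(bool serverIsOld) {
    return serverIsOld ? kUnknownTransaction : 0;
}

// The client detects the error and falls back to its pre-upgrade code path.
std::string doWork(bool serverIsOld) {
    if (callNewMethod(serverIsOld) == kUnknownTransaction) {
        return "fallback";   // older behavior
    }
    return "new-path";       // use the newly added method's result
}
```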
Parcelables
When new fields are added to parcelables, old clients and servers drop them. When new clients and servers receive old parcelables, the default values for new fields are automatically filled in. This means that defaults need to be specified for all new fields in a parcelable.
Clients should not expect servers to use the new fields unless they know the server is implementing the version that has the field defined (see querying versions).
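The fill-in rule can be modeled in a few lines; this is a toy model of the idea, not the real libbinder wire format:

```cpp
#include <string>
#include <vector>

// Toy "parcel": just the numeric fields, in declaration order. A v1 sender
// stops after b; a v2 sender also writes c.
struct ThingV2 {
    std::string a = "foo";  // field since v1
    int b = 0;              // field since v1
    int c = 42;             // added in v2 -- must carry a default
};

ThingV2 readThing(const std::vector<int>& payload) {
    ThingV2 t;
    if (payload.size() > 0) t.b = payload[0];
    if (payload.size() > 1) t.c = payload[1];  // absent in v1 payloads,
    return t;                                  // so c keeps its default
}
```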
Enums and constants
Similarly, clients and servers should either reject or ignore unrecognized constant values and enumerators as appropriate, since more may be added in the future. For example, a server should not abort when it receives an enumerator that it doesn't know about. It should either ignore it, or return something so the client knows it's unsupported in this implementation.
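A sketch of the server side of that advice (the enumerator set and the error value are invented):

```cpp
// This server was built when only 0 (kOff) and 1 (kOn) existed; a newer
// client may legitimately send enumerators added in later versions.
constexpr int kErrorUnsupported = -1;

int handleSetMode(int rawMode) {
    switch (rawMode) {
        case 0:   // kOff
        case 1:   // kOn
            return 0;                  // recognized: apply it
        default:
            return kErrorUnsupported;  // unknown enumerator: report, don't abort
    }
}
```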
Unions
Trying to send a union with a new field fails if the receiver is old and doesn't know about the field. The implementation will never see the union with the new field. The failure is ignored if it's a oneway transaction; otherwise the error is BAD_VALUE (for the C++ or NDK backend) or IllegalArgumentException (for the Java backend). The error is received if the client is sending a union set to the new field to an old server, or when it's an old client receiving the union from a new server.
Module naming rules
In Android 11, for each combination of the versions and the backends enabled, a stub library module is automatically created. To refer to a specific stub library module for linking, don't use the name of the aidl_interface module, but the name of the stub library module, which is ifacename-version-backend, where

- ifacename: name of the aidl_interface module
- version is either of
  - Vversion-number for the frozen versions
  - Vlatest-frozen-version-number + 1 for the tip-of-tree (yet-to-be-frozen) version
- backend is either of
  - java for the Java backend,
  - cpp for the C++ backend,
  - ndk or ndk_platform for the NDK backend. The former is for apps, and the latter is for platform usage.
Assume that there is a module with name foo and its latest version is 2, and it supports both NDK and C++. In this case, AIDL generates these modules:
- Based on version 1: foo-V1-(java|cpp|ndk|ndk_platform)
- Based on version 2 (the latest stable version): foo-V2-(java|cpp|ndk|ndk_platform)
- Based on ToT version: foo-V3-(java|cpp|ndk|ndk_platform)
Compared to Android 11,

- foo-backend, which referred to the latest stable version, becomes foo-V2-backend
- foo-unstable-backend, which referred to the ToT version, becomes foo-V3-backend
The output file names are always the same as module names.
- Based on version 1: foo-V1-(cpp|ndk|ndk_platform).so
- Based on version 2: foo-V2-(cpp|ndk|ndk_platform).so
- Based on ToT version: foo-V3-(cpp|ndk|ndk_platform).so
Note that the AIDL compiler doesn't create either an unstable version module, or a non-versioned module for a stable AIDL interface. As of Android 12, the module name generated from a stable AIDL interface always includes its version.
New meta interface methods
Android 10 adds several meta interface methods for the stable AIDL.
Querying the interface version of the remote object
Clients can query the version and hash of the interface that the remote object is implementing and compare the returned values with the values of the interface that the client is using.
Example with the cpp backend:
sp<IFoo> foo = ... // the remote object

int32_t my_ver = IFoo::VERSION;
int32_t remote_ver = foo->getInterfaceVersion();
if (remote_ver < my_ver) {
    // the remote side is using an older interface
}

std::string my_hash = IFoo::HASH;
std::string remote_hash = foo->getInterfaceHash();
Example with the ndk (and the ndk_platform) backend:
IFoo* foo = ... // the remote object

int32_t my_ver = IFoo::version;
int32_t remote_ver = 0;
if (foo->getInterfaceVersion(&remote_ver).isOk() && remote_ver < my_ver) {
    // the remote side is using an older interface
}

std::string my_hash = IFoo::hash;
std::string remote_hash;
foo->getInterfaceHash(&remote_hash);
Example with the java backend:
IFoo foo = ... // the remote object

int myVer = IFoo.VERSION;
int remoteVer = foo.getInterfaceVersion();
if (remoteVer < myVer) {
    // the remote side is using an older interface
}

String myHash = IFoo.HASH;
String remoteHash = foo.getInterfaceHash();
For the Java language, the remote side MUST implement getInterfaceVersion() and getInterfaceHash() as follows:
class MyFoo extends IFoo.Stub {
    @Override
    public final int getInterfaceVersion() {
        return IFoo.VERSION;
    }

    @Override
    public final String getInterfaceHash() {
        return IFoo.HASH;
    }
}
This is because the generated classes (IFoo, IFoo.Stub, etc.) are shared between the client and server (for example, the classes can be in the boot classpath). When classes are shared, the server is also linked against the newest version of the classes even though it might have been built with an older version of the interface. If this meta interface is implemented in the shared class, it always returns the newest version. However, by implementing the method as above, the version number of the interface is embedded in the server's code (because IFoo.VERSION is a static final int that is inlined when referenced) and thus the method can return the exact version the server was built with.
Dealing with older interfaces
It's possible that a client is updated with the newer version of an AIDL interface but the server is using the old AIDL interface. In such cases, calling a method on an old interface returns UNKNOWN_TRANSACTION.
With stable AIDL, clients have more control. In the client side, you can set a default implementation to an AIDL interface. A method in the default implementation is invoked only when the method isn't implemented in the remote side (because it was built with an older version of the interface). Since defaults are set globally, they should not be used from potentially shared contexts.
Example in C++:
class MyDefault : public IFooDefault {
    Status anAddedMethod(...) {
        // do something default
    }
};

// once per an interface in a process
IFoo::setDefaultImpl(std::unique_ptr<IFoo>(MyDefault));

foo->anAddedMethod(...); // MyDefault::anAddedMethod() will be called if the
                         // remote side is not implementing it
Example in Java:
IFoo.Stub.setDefaultImpl(new IFoo.Default() {
    @Override
    public xxx anAddedMethod(...) throws RemoteException {
        // do something default
    }
}); // once per an interface in a process

foo.anAddedMethod(...);
You don't need to provide the default implementation of all methods in an AIDL interface. Methods that are guaranteed to be implemented in the remote side (because you are certain that the remote is built when the methods were in the AIDL interface description) don't need to be overridden in the default impl class.
Converting existing AIDL to structured/stable AIDL
If you have an existing AIDL interface and code that uses it, use the following steps to convert the interface to a stable AIDL interface.
1. Identify all of the dependencies of your interface. For every package the interface depends on, determine if the package is defined in stable AIDL. If not defined, the package must be converted.

2. Convert all parcelables in your interface into stable parcelables (the interface files themselves can remain unchanged). Do this by expressing their structure directly in AIDL files. Management classes must be rewritten to use these new types. This can be done before you create an aidl_interface package (below).

3. Create an aidl_interface package (as described above) that contains the name of your module, its dependencies, and any other information you need. To make it stabilized (not just structured), it also needs to be versioned. For more information, see Versioning interfaces.
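To tie the steps together, here is a hypothetical sketch of such an aidl_interface definition (the module name, source glob, and version list are invented, and the exact property set may differ between Android releases):

```
aidl_interface {
    name: "my-module-name",
    srcs: ["some/package/*.aidl"],
    stability: "vintf",
    backend: {
        java: {
            platform_apis: true,
        },
    },
    // maintained via the freeze-api target as versions are frozen
    versions: ["1", "2"],
}
```

Once this builds, the my-module-name-freeze-api and my-module-name-update-api targets described above become available.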
A reader for a data format used by Omega3p, Tau3p, and several other tools used at the Stanford Linear Accelerator Center (SLAC).
#include <vtkSLACParticleReader.h>
A reader for a data format used by Omega3p, Tau3p, and several other tools used at the Stanford Linear Accelerator Center (SLAC).
The underlying format uses netCDF to store arrays, but also imposes some conventions to store a list of particles in 3D space.
This reader supports pieces, but in actuality only loads anything in piece 0. All other pieces are empty.
Definition at line 52 of file vtkSLACParticleReader.h.
Definition at line 55 of file vtkSLACParticleReader.
Returns true if the given file can be read by this reader.
This is called by the superclass.
This is the method you should override.
Reimplemented from vtkPolyDataAlgorithm.
Convenience function that checks the dimensions of a 2D netCDF array that is supposed to be a set of tuples.
It makes sure that the number of dimensions is expected and that the number of components in each tuple agrees with what is expected. It then returns the number of tuples. An error is emitted and 0 is returned if the checks fail.
Definition at line 71 of file vtkSLACParticleReader.h. | https://vtk.org/doc/nightly/html/classvtkSLACParticleReader.html | CC-MAIN-2019-39 | refinedweb | 202 | 58.89 |
The QCamera class provides an interface for system camera devices. More...
#include <QCamera>
This class is under development and is subject to change.
Inherits QMediaObject.
The QCamera class provides interface for system camera devices.
QCamera can be used with QVideoWidget for viewfinder display and QMediaRecorder for video recording.
camera = new QCamera;

viewFinder = new QVideoWidget(camera);
viewFinder->show();

recorder = new QMediaRecorder(camera);

camera->start();
The Camera API of Qt Mobility is still in ALPHA. It has not undergone the same level of review and testing as the rest of the APIs.
The API exposed by the classes in this component are not stable, and will undergo modification or removal prior to the final release of Qt Mobility.
The ExposureModes type is a typedef for QFlags<ExposureMode>. It stores an OR combination of ExposureMode values.
The FlashModes type is a typedef for QFlags<FlashMode>. It stores an OR combination of FlashMode values.
The FocusModes type is a typedef for QFlags<FocusMode>. It stores an OR combination of FocusMode values.
The MeteringModes type is a typedef for QFlags<MeteringMode>. It stores an OR combination of MeteringMode values.
The WhiteBalanceModes type is a typedef for QFlags<WhiteBalanceMode>. It stores an OR combination of WhiteBalanceMode values.
This property holds the lens aperture, specified as an f-number: the ratio of the focal length to the effective aperture diameter.
Access functions:
Notifier signal:
This property holds the sensor ISO sensitivity. The lower the sensitivity, the lower the noise, but the more light is needed.
Access functions:
Notifier signal:
Indicates the camera is ready to capture an image immediately.
Access functions:
Notifier signal:
This property holds the effective length of time the shutter is open in seconds.
Access functions:
Notifier signal:
This property holds the current state of the camera object.
Access functions:
Notifier signal:
Construct a QCamera from service provider and parent.
Construct a QCamera from device name device and parent.
Destroys the camera object.
Signal emitted when aperture changes to value.
Signal emitted when the aperture range has changed.
Returns a list of camera devices available from the default service provider.
Capture the image and save it to file. This operation is asynchronous in the majority of cases, followed by the signal QCamera::imageCaptured() or error().
Returns the description of the device.
Returns the error state of the object.
Signal emitted when error state changes to value.
Returns a string describing a camera's error state.
Returns the exposure compensation.
See also setExposureCompensation().
Signal emitted when exposure is locked.
Returns the exposure mode being used.
See also setExposureMode().
Returns the flash mode being used.
See also setFlashMode().
Signal emitted when the flash status has changed; the flash is ready if ready is true.
Signal emitted when focus is locked.
Returns the focus mode being used.
See also setFocusMode().
Returns the focus status.
Signal emitted when focus status changed.
Signals that an image intended to be saved to fileName has been captured and a preview is available.
Signals that a captured image has been saved to fileName.
Returns true if the exposure is locked.
Returns true if flash is charged.
Returns true if the focus is locked.
Returns true if macro focusing is supported.
Signal emitted when sensitivity changes to value.
Lock the exposure.
Lock the focus.
Returns true if macro focusing is enabled.
See also setMacroFocusingEnabled().
Returns the current color temperature if the manual white balance is active, otherwise the return value is undefined.
See also setManualWhiteBalance().
Returns the largest supported aperture.
See also minimumAperture() and aperture().
Returns the maximum digital zoom.
Returns the largest supported ISO sensitivity.
Returns the maximum optical zoom.
Returns the largest supported shutter speed.
See also shutterSpeed().
Returns the metering mode being used.
See also setMeteringMode().
Returns the smallest supported aperture as an F number, corresponding to a wide-open lens. For example, if the camera lens supports an aperture range from F/1.4 to F/32, the minimum aperture value will be 1.4 and the maximum 32.
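As a plain-C++ aside (nothing Qt-specific), the f-number is just the ratio described under the aperture property, which is why a wide-open lens has the smallest value:

```cpp
// N = focal length / effective aperture diameter.
double fNumber(double focalLengthMm, double apertureDiameterMm) {
    return focalLengthMm / apertureDiameterMm;
}
// A 50 mm lens with a 25 mm pupil is f/2.0; widening the pupil
// (larger diameter) lowers the f-number toward e.g. F/1.4.
```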
Returns the smallest supported ISO sensitivity.
Returns the smallest supported shutter speed.
Signals that a camera's ready for capture state has changed.
Turn on auto aperture
Turn on auto sensitivity
Turn on auto shutter speed
Sets the exposure compensation to ev
See also exposureCompensation().
Set exposure mode to mode
See also exposureMode().
Set the flash mode to mode
See also flashMode().
Set the focus mode to mode
See also focusMode().
Set macro focusing to enabled.
See also macroFocusingEnabled().
Sets manual white balance to colorTemperature
See also manualWhiteBalance().
Sets the metering mode to mode.
See also meteringMode().
Sets the white balance to mode.
See also whiteBalanceMode().
Signals that a camera's shutter speed has changed.
Starts the camera.
This can involve powering up the camera device and can be asynchronous.
The state is changed to QCamera::ActiveState if the camera is started successfully; otherwise the error() signal is emitted.
Signal emitted when state of the Camera object has changed.
Stops the camera.
Returns the list of apertures if the camera supports only a fixed set of aperture values; otherwise returns an empty list.
Return the exposure modes available.
Returns the flash modes available.
Returns the focus modes available.
Returns the list of ISO sensitivities if the camera supports only a fixed set of ISO sensitivity values; otherwise returns an empty list.
Returns the metering modes available.
Returns the list of shutter speed values if the camera supports only a fixed set of shutter speed values; otherwise returns an empty list.
Returns the white balance modes available.
Unlock the exposure.
Unlock the focus.
Returns the white balance mode being used.
See also setWhiteBalanceMode().
Set the zoom to value.
Returns the current zoom.
Signal emitted when zoom value changes to new value. | http://doc.qt.digia.com/qtmobility-1.0-tp/qcamera.html | CC-MAIN-2014-10 | refinedweb | 921 | 54.18 |
Re: STRING length
- From: "markww" <markww@xxxxxxxxx>
- Date: 14 Nov 2006 17:09:06 -0800
Actually please disregard the previous post, I found I needed to with
and use:
Ada.Text_IO.Unbounded_IO
So.. finally one compilation error remains, the actual call to my
function:
begin
Add_Record("Mark", "555-555-5555", "123 main street");
end LinkList;
the error is:
expected private type "Ada.Strings.Unbounded.Unbounded_String"
what does that mean? The problem is with calling the procedure, if I
comment the call out compilation is successful. The procedure looks
like:
procedure Add_Record(strName : in UNBOUNDED_STRING;
strPhone : in UNBOUNDED_STRING;
strAddress : in UNBOUNDED_STRING)
is
Temp : CHAR_REC_POINT;
begin
Put(strName);
New_Line;
Put(strPhone);
New_Line;
Put(strAddress);
New_Line;
end Add_Record;
I guess the compiler doesn't interpret a literal string as an
unbounded_string?
Thanks,
Mark
On Nov 14, 7:44 pm, "markww" <mar...@xxxxxxxxx> wrote:
Thanks Georg, that looks to be exactly what I need. I do have a problem
'#including', or, 'withing' rather unbounded string.
Now the head of my source file looks like:
with Ada.Text_IO, Ada.Integer_Text_IO, Ada.Strings.Unbounded;
use Ada.Text_IO, Ada.Integer_Text_IO;
which is alright but as soon as I try:
use Ada.Strings.Unbounded;
the compiler gives me a bunch of errors, it seems to conflict with
Text_IO / Integer_Text_IO? Seems like all my previous calls to Put()
now became invalid. I'm not familiar with Ada but with C++ and
understand namespace collisions, is the same thing going on here?
Thanks
On Nov 14, 5:24 pm, Georg Bauhaus <bauh...@xxxxxxxxxxxxx> wrote:
On Tue, 2006-11-14 at 14:51 -0800, markww wrote:
Hi,
How does one use a variable length string in ada?

You use variable length strings in Ada by declaring them to be of type UNBOUNDED_STRING, which is defined in Ada.Strings.Unbounded.
type MY_RECORD is
record
Name: UNBOUNDED_STRING;
Phone: UNBOUNDED_STRING;
Address: UNBOUNDED_STRING;
end record;
Given that Phone is likely to be limited in length, you
could consider declaring the Phone component to be of
type BOUNDED_STRING, which is a string type with a maximum length.
Unlike STRING, objects of this type can have any number
of characters up to the maximum. See Ada.Strings.Bounded.
Yet another use of strings is in nested scopes: If you need
a string in just one place, e.g. temporarily, you can use
a plain STRING as in
declare
temp: constant STRING := some_string_returning_func(...);
begin
-- use temp
end;
The point here is that the `temp` string variable takes
its bound from the initialization. You can also make it a
variable, if you need to write to string components.
See
-- Georg
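For the record, the "expected private type Unbounded_String" error in this message happens because Ada string literals like "Mark" have type String, not Unbounded_String. A sketch of the usual fix (untested here) is to convert the literals explicitly with To_Unbounded_String:

```ada
with Ada.Strings.Unbounded; use Ada.Strings.Unbounded;
--  ...
Add_Record (To_Unbounded_String ("Mark"),
            To_Unbounded_String ("555-555-5555"),
            To_Unbounded_String ("123 main street"));
```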
- Follow-Ups:
- Re: STRING length
- From: Jeffrey R. Carter
- Re: STRING length
- From: Ludovic Brenta
- References:
- STRING length
- From: markww
- Re: STRING length
- From: Georg Bauhaus
- Re: STRING length
- From: markww
- the difference?
Sorry, couldn't resist.
Admin
Yeah, seriously. Real men code in VB6!
*runs*
Admin
Long names for identifiers or depth of classes does not directly correspond to "more code". It's still a single statement, which is fine with me. Smalltalk, Java and Delphi programmers have dealt with this for years; new C# coders coming from a procedural environment (Pascal, C, REXX, etc) think it's abhorrent to have such "busy" names for identifiers.
All I can say is... take a gander at any C or C++ header file for Windows and tell me the identifers for some of those constants don't throw you for a loop!
Admin
See, there's this fabulous thing called a using clause that lets you abbreviate namespace complexity.
Consider System.Console.Writeline.
You could do this:
using System;
using out = System.Console;
Now, you can do this:
out.WriteLine("foo");
Better yet, since you have assigned an identifier of "out" to System.Console, you can easily switch out System.Console to something else. I have done this very thing.
I wrote a static class called SocketReporter which is a multithreaded tcp server that resembles the console. As the app runs, all output going to out is sent to any tcp clients which are connected to the SocketReporter service when it's running. This makes it a breeze to debug Win32 service apps. I can run the .exe in the debugger and have it write to the system console when necessary, then swap over to SocketReporter with a conditional compile for Release mode when I'm running the service on a server and monitor its output.
Admin
Couldn't you just declare a variable and point it to System.Console?
Admin
Note that the Debug.WriteLine is only necessary if you want to print to the debug window, and could be replaced by either MessageBox.Show() or Console.WriteLine() (depending on whether it's a console or windows forms app), which would be a fairer comparison. You can also shorten it down quite a lot, either by using the "using" keyword to specify the namespace at the top of your source file, or by using a different choice of methods.
Oh, and your Java code wouldn't actually compile. You're missing a ')'.
How about this:
System.Windows.Forms.MessageBox.Show(Convert.ToInt32("ABC", 16));
which, if you have System.Windows.Forms included in your imported classes at the top of your .cs file, can become this:
MessageBox.Show(Convert.ToInt32("ABC", 16));
The actual conversion is the Convert.ToInt32() method. "But sir, I don't know how to read MSDN! Waaa!"
On a slightly more controversial note, are we now judging languages solely on how many characters I need to type? I'll stick with C#, since it doesn't have classes infested with horrible Hungarian notation (unlike C++), and the IDE takes most of the work out of it anyway with Intellisense.
Admin
Darn will I ever get used to the way this forum works - I always forget I'm only on the first page. Anyway...
Er, isn't that what he did?
Admin
Yeah, relying on your compiler to tell you when you have compile errors 'encourages laziness'
Admin
LOL. I was just nitpicking, I know.
Admin
I'm self taught too.
The point I was making was that setting a variable equal to a number means that the variable contains just that - a number. It doesn't matter what syntax you use to set the value; the end result's the same.
Therefore, when you print it out again, of course it's going to print out as a decimal! It's not a bug and it's not a special feature. It's basic behaviour of the language.
<FONT face="Lucida Console" size=2> var hex1 = 0x0f;</FONT>
<FONT face="Lucida Console" size=2>does exactly the same thing as:</FONT>
<FONT face="Lucida Console" size=2> var hex1 = 15;</FONT>
<FONT face="Lucida Console" size=2>They both set the variable hex1 equal to the number 15.</FONT>
So when you print out the contents of hex1, you're going to get 15.
Admin
John,
why do some coders become touchy if someone says something that might possibly shed a bad light onto their language? No language is 'perfect'.
I have replied to the question: "will you smugly show a way to do it with 50% less code?" No more, no less.
I am not interested in something as useless as language bashing.
Admin
"I'll stick with C#, since it doesn't have classes infested with horrible Hungarian notation (unlike C++)"
I'm sorry - where in the standard C++ language is there any hungarian notation? In fact, the guys behind C++ are pretty much against hungarian notation, full stop.
Now some Microsoft libraries may have hungarian in them, but that's Microsoft. And they may have finally learnt their lesson if, as you imply, C# doesn't have it in its standard libraries.
Admin
I was somehow under the impression that a number var would retain the format it was defined in.
So it doesn't.
I can sleep again.
Admin
Sorry, I got a bit carried away. I just wanted to point out that it wasn't a fair comparison. But the point stands - the fact that JavaScript can do it in fewer characters doesn't make JavaScript a better language. If you weren't trying to make that comparison, I can't see wat point you were trying to make.
About hungarian notation - it's not just the standard library, there's also all the 3rd-party libraries, such as the VCL, STL or MFC. All these IIRC favour hungarian notation in some form, or like to use other forms of cryptic abbreviations for their member names.
Admin
VCL I don't know, and a google for it gives, as a first hit, a library of 'furry' art which I dare not click. MFC is MS, so doesn't count as 'third party'
STL may have some oddish naming conventions internally, but externally it's not hungarian..
Admin
VCL is Borland's Visual Component Library. Basically the STL for graphical controls. The STL I seem to remember having lots of strange names - didn't they have T (for Type) as a prefix to most class names?. MFC I count as 3rd party, unless you're saying it's standard C++. When "standard libray" was mentioned earlier, I assumed it was talking about standard C++ as defined by Bjarne Stroustrup, which MFC is not.
Admin
Ah, right. I know nothing of it.
Nope. That would be mfc or root or something like that. STL is nice clean 'vector<foo>' and the suchlike. There is '_Tp', which is a pointer to a type, but I'm fairly sure it's only used internally. STL is actually very readable and usable once you get the hang of it, but these days I don't torture myself with C++ unless I absolutely have to :)
RogueWave was all 'Collectable' and so on, IIRC, can't remember any hungarian there either. It was still horrible to use, though, and a monstrous pain if you upgraded versions.
It's been a while though.
Indeed. What I posted was somewhat misleading, I was trying to make the distinction between 'standard', '3rd party' and 'Microsoft'. My bad. I was discounting MFC due to its very microsoftness.
In summary, I'd say that, yes, there are 3rd party non-MS class libraries that use hungarian out there, but probably not the majority.
Simon
Admin
<FONT face="Courier New" size=2>s/abbreviate/abuse</FONT>
Admin
Microsoft Soviet-Era C++ programming:
BOOL CVeryBigClass::GetResult()
{
BOOL bBoolResult = (
GetScaryApiInfoEx(
m_hHandle,
m_nNumber,
m_lpszString,
NULL,
NULL,
NULL,
NULL,
// Many more lines of NULLs
// omitted for brevity
NULL)) ? TRUE : FALSE;
if (bBoolResult == TRUE)
return TRUE;
else if (bBoolResult != TRUE)
return FALSE;
else
return FALSE;
}
Admin
Well, at least this code can easily be changed to add very large hex numbers, much bigger than the largest value of int. That's something you can't do with Int.Parse(). It is obviously superfluous for 8-digit hex numbers; maybe the code was taken from a library that handles larger numbers, e.g. for calculating RSA keys. BTW: Building a solution with Int32.Parse()+Int32.Parse() would be faster, but give you wrong results when you have an integer overflow. To safely add two 8-digit hex numbers, you need at least 33 bits.
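The 33-bit point is easy to sidestep in a language with 64-bit integers; here is a C++ sketch (not the VB under discussion) that widens before adding so the carry bit survives:

```cpp
#include <cstdio>
#include <cstdlib>
#include <string>

// Add two hex strings of up to 8 digits each. The 64-bit intermediate
// preserves the 33rd (carry) bit that a 32-bit add would lose.
std::string addHex(const std::string& a, const std::string& b) {
    unsigned long long sum = std::strtoull(a.c_str(), nullptr, 16) +
                             std::strtoull(b.c_str(), nullptr, 16);
    char buf[32];
    std::snprintf(buf, sizeof buf, "%llX", sum);
    return buf;
}
// addHex("FFFFFFFF", "FFFFFFFF") == "1FFFFFFFE" (nine hex digits, 33 bits)
```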
Admin
@tufty: We are talking about the same STL, right? As in Borland's Standard Template Library (IIRC, and it has been a while, Borland did pretty much the same thing as MFC - a component library, but split into visual and non-visual components). Guess it must just be the VCL and whatever they called their non-visual library (which I thought was called the STL but I guess not). I know that they have a lot of stuff like TData, TQuery and so on.
No worries
I dunno, both MS and Borland do it (though MS has indeed strated to ditch this strategy, if only for cross-language compatibility and cleanness in .NET) and most people take their cue from them. Certainly, the software (written in Delphi with a bit of C++) we produce at work uses it extensively, and it was what I was always taught to do. It might be different in the non-Windows world, though.
Admin
No, then. Most certainly not. If Borland have called their abortion 'STL', they are asshats of a fairly major calibre. is the STL
:)
Admin
A number is an abstract concept and does NOT have a base. A base only comes into play when you want to represent a number physically as a sequence of digits. Internally, all numbers are in fact stored in binary. Other bases only become relevant when there is a conversion between the internal format and a string. That's what happens to the number definition in the source code: the compiler's or interpreter's parser turns the string "0x0f" into the internal binary representation, using base 16 because it starts with 0x. When you want to turn it back into a string, that's when the base again becomes relevant, the default being 10.
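A quick illustration of that point (C++ here rather than the thread's JavaScript): one stored value, three textual renderings, and the base exists only at the string boundary:

```cpp
#include <cstdio>
#include <string>

// Render the same int in decimal, hex, and octal; the stored value
// itself has no base.
std::string renderBases(int n) {
    char buf[64];
    std::snprintf(buf, sizeof buf, "%d %x %o", n, (unsigned)n, (unsigned)n);
    return buf;
}
// renderBases(0x0f) == "15 f 17"
```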
Admin
It isn't clear if the sign of the hex strings is to be treated like VB normally treats them. It appears that the two hex strings are added as unsigned (positive) values.
Unfortunately, VB only supports hex literals and operations that can be represented as a long integer. Long Integers are always signed.
There are at least three datatypes with enough bits to accomodate the largest possible value of the sum of two 8 hex digit strings (Double, Currency, Decimal). The problem with this approach comes in when trying to convert the sum back to hex characters (see prior point).
(snicker) "= Right(h3,16)"
I wonder if this code wasn't originally part of Excel VBA, with the hex2dec() function reference. ;-)
With no specific datatype declaratives, I assume that either this code is run in a scripting environment (ASP, VBScript) or performance has never been a concern of the programmer or the IT staff usually concerned with such things.
==========================
For brevity and efficiency, and sticking within the VB language, I would suggest the following solution.
My implementation example:

Function AddHex(hex1, hex2)
    'Adds two hexadecimal strings (up to 8-digits each)
    Dim h1, h2, hRight6, Carry, hLeft4
    'Step1 - convert to fixed length (8) strings
    h1 = Right("00000000" & hex1, 8)
    h2 = Right("00000000" & hex2, 8)
    'Step2 - add right most 6 hex digits
    hRight6 = Right("00000000" & Hex(CLng("&h" & Right(h1, 6)) + CLng("&h" & Right(h2, 6))), 8)
    Carry = Left(hRight6, 2)
    hRight6 = Right(hRight6, 6)
    'Step3 - add the high-order digits and the carry (if any)
    hLeft4 = Hex(CLng("&h" & Left(h1, 2)) + CLng("&h" & Left(h2, 2)) + CLng("&h" & Carry))
    'Step4 and Step5 - concatenate the hex strings and make fixed length (10)
    AddHex = Right("0000000000" & hLeft4 & hRight6, 10)
End Function
=========================
If this needed to be extended to larger hex strings, I would probably use three numeric arrays, with each position holding the value of pairs of hex digits. The third array would be used for carry data. Output would consist of individual array item conversion from numeric value to hexidecimal string equivalence using the <FONT face="Courier New">Right("0" & Hex(),2)</FONT> function combinations as shown above.
Admin
Ah, VCL, STL and MFC. Third party libraries. One of these things is not like the others.
You WERE making some sort of obscure joke, right? Right?
*cries*
Admin
Read the exchanges between me and tufty.
I remembered where I got that idea from, I think. It was a college tutor. We were doing basic C++, and I asked if I should read about the STL. He said no, that was just a bunch of libraries written by Borland, similar to MFC, and the knowledge would be useless if you weren't using C++ Builder. I left that course soon after because they were cutting off the programming side in the second year (maybe that was why). That was the last time (apart from an abortive attempt at a game with some other guys) that I looked at C++. I decided I preferred C#.
Admin
In VB, the "&H" is the hex identifier. So whereas you would write something like this in C#:
int i = 0xFA;
You would write this in VB.NET:
dim i as integer = &HFA
Or this in VB6:
dim i as integer
i = &HFA
As for C#'s conversion of hex strings:
string hexValue = "FA";
int i = Convert.ToInt32(hexValue,16); // second parameter indicates the base to convert from
Admin
Sorry ... I'm new to the board and still not sure how to include the original message in my reply. The above message was meant as a reply to the message that said:
"Ok I'm not familiar with VB at all...what does "&H" accomplish? And how would this be accomplished in a language like C#? Because I'm not aware of any way to convert a hex string to int or long in that language. Int32.Parse( string ) will throw an exception if you give it something like "F" or "0xF"..." | https://thedailywtf.com/articles/comments/Hexing_Around/3 | CC-MAIN-2019-26 | refinedweb | 2,424 | 72.56 |
By teaching you how to code machine-learning algorithms using a test-driven approach, this practical book helps you gain
English; xii, 201 pages: illustrations; 24 cm; Year 2017
Thoughtful Machine Learning with Python: A Test-Driven Approach

Matthew Kirk

Beijing · Boston · Farnham · Sebastopol · Tokyo
Thoughtful Machine Learning with Python
by Matthew Kirk

Copyright © 2017 Matthew Kirk. [email protected]
Editors: Mike Loukides and Shannon Cutt
Production Editor: Nicholas Adams
Copyeditor: James Fraleigh
Proofreader: Charles Roumeliotis
Indexer: Wendy Catalano
Interior Designer: David Futato
Cover Designer: Randy Comer
Illustrator: Rebecca Demarest
January 2017: First Edition

Revision History for the First Edition
2017-01-10: First Release

See for release details.

The O'Reilly logo is a registered trademark of O'Reilly Media, Inc. Thoughtful Machine Learning with Python, 978-1-491-92413-6 [LSI]
Table of Contents
Preface

1. Probably Approximately Correct Software
   Writing Software Right
   SOLID
   Testing or TDD
   Refactoring
   Writing the Right Software
   Writing the Right Software with Machine Learning
   What Exactly Is Machine Learning?
   The High Interest Credit Card Debt of Machine Learning
   SOLID Applied to Machine Learning
   Machine Learning Code Is Complex but Not Impossible
   TDD: Scientific Method 2.0
   Refactoring Our Way to Knowledge
   The Plan for the Book

2. A Quick Introduction to Machine Learning
   What Is Machine Learning?
   Supervised Learning
   Unsupervised Learning
   Reinforcement Learning
   What Can Machine Learning Accomplish?
   Mathematical Notation Used Throughout the Book
   Conclusion

3. K-Nearest Neighbors
   How Do You Determine Whether You Want to Buy a House?
   How Valuable Is That House?
   Hedonic Regression
   What Is a Neighborhood?
   K-Nearest Neighbors
   Mr. K's Nearest Neighborhood
   Distances
   Triangle Inequality
   Geometrical Distance
   Computational Distances
   Statistical Distances
   Curse of Dimensionality
   How Do We Pick K?
   Guessing K
   Heuristics for Picking K
   Valuing Houses in Seattle
   About the Data
   General Strategy
   Coding and Testing Design
   KNN Regressor Construction
   KNN Testing
   Conclusion

4. Naive Bayesian Classification
   Using Bayes' Theorem to Find Fraudulent Orders
   Conditional Probabilities
   Probability Symbols
   Inverse Conditional Probability (aka Bayes' Theorem)
   Naive Bayesian Classifier
   The Chain Rule
   Naiveté in Bayesian Reasoning
   Pseudocount
   Spam Filter
   Setup Notes
   Coding and Testing Design
   Data Source
   Email Class
   Tokenization and Context
   SpamTrainer
   Error Minimization Through Cross-Validation
   Conclusion

5. Decision Trees and Random Forests
   The Nuances of Mushrooms
   Classifying Mushrooms Using a Folk Theorem
   Finding an Optimal Switch Point
   Information Gain
   GINI Impurity
   Variance Reduction
   Pruning Trees
   Ensemble Learning
   Writing a Mushroom Classifier
   Conclusion

6. Hidden Markov Models
   Tracking User Behavior Using State Machines
   Emissions/Observations of Underlying States
   Simplification Through the Markov Assumption
   Using Markov Chains Instead of a Finite State Machine
   Hidden Markov Model
   Evaluation: Forward-Backward Algorithm
   Mathematical Representation of the Forward-Backward Algorithm
   Using User Behavior
   The Decoding Problem Through the Viterbi Algorithm
   The Learning Problem
   Part-of-Speech Tagging with the Brown Corpus
   Setup Notes
   Coding and Testing Design
   The Seam of Our Part-of-Speech Tagger: CorpusParser
   Writing the Part-of-Speech Tagger
   Cross-Validating to Get Confidence in the Model
   How to Make This Model Better
   Conclusion

7. Support Vector Machines
   Customer Happiness as a Function of What They Say
   Sentiment Classification Using SVMs
   The Theory Behind SVMs
   Decision Boundary
   Maximizing Boundaries
   Kernel Trick: Feature Transformation
   Optimizing with Slack
   Sentiment Analyzer
   Setup Notes
   Coding and Testing Design
   SVM Testing Strategies
   Corpus Class
   CorpusSet Class
   Model Validation and the Sentiment Classifier
   Aggregating Sentiment
   Exponentially Weighted Moving Average
   Mapping Sentiment to Bottom Line
   Conclusion

8. Neural Networks
   What Is a Neural Network?
   History of Neural Nets
   Boolean Logic
   Perceptrons
   How to Construct Feed-Forward Neural Nets
   Input Layer
   Hidden Layers
   Neurons
   Activation Functions
   Output Layer
   Training Algorithms
   The Delta Rule
   Back Propagation
   QuickProp
   RProp
   Building Neural Networks
   How Many Hidden Layers?
   How Many Neurons for Each Layer?
   Tolerance for Error and Max Epochs
   Using a Neural Network to Classify a Language
   Setup Notes
   Coding and Testing Design
   The Data
   Writing the Seam Test for Language
   Cross-Validating Our Way to a Network Class
   Tuning the Neural Network
   Precision and Recall for Neural Networks
   Wrap-Up of Example
   Conclusion

9. Clustering
   Studying Data Without Any Bias
   User Cohorts
   Testing Cluster Mappings
   Fitness of a Cluster
   Silhouette Coefficient
   Comparing Results to Ground Truth
   K-Means Clustering
   The K-Means Algorithm
   Downside of K-Means Clustering
   EM Clustering
   Algorithm
   The Impossibility Theorem
   Example: Categorizing Music
   Setup Notes
   Gathering the Data
   Coding Design
   Analyzing the Data with K-Means
   EM Clustering Our Data
   The Results from the EM Jazz Clustering
   Conclusion

10. Improving Models and Data Extraction
   Debate Club
   Picking Better Data
   Feature Selection
   Exhaustive Search
   Random Feature Selection
   A Better Feature Selection Algorithm
   Minimum Redundancy Maximum Relevance Feature Selection
   Feature Transformation and Matrix Factorization
   Principal Component Analysis
   Independent Component Analysis
   Ensemble Learning
   Bagging
   Boosting
   Conclusion

11. Putting It Together: Conclusion
   Machine Learning Algorithms Revisited
   How to Use This Information to Solve Problems
   What's Next for You?

Index
Preface
I wrote the first edition of Thoughtful Machine Learning out of frustration over my coworkers' lack of discipline. Back in 2009 I was working on lots of machine learning projects and found that as soon as we introduced support vector machines, neural nets, or anything else, all of a sudden common coding practice just went out the window.

Thoughtful Machine Learning was my response. At the time I was writing 100% of my code in Ruby and wrote this book for that language. Well, as you can imagine, that was a tough challenge, and I'm excited to present a new edition of this book rewritten for Python. I have gone through most of the chapters, changed the examples, and made it much more up to date and useful for people who will write machine learning code. I hope you enjoy it.

As I stated in the first edition, my door is always open. If you want to talk to me for any reason, feel free to drop me a line at [email protected]. And if you ever make it to Seattle, I would love to meet you over coffee.
Constant width italic
Shows text that should be replaced with user-supplied values or by values determined by context.

This element signifies a general note.

Attribution example: "Thoughtful Machine Learning with Python by Matthew Kirk (O'Reilly). Copyright 2017 Matthew Kirk, 978-1-491-92413-6."

Acknowledgments

I've waited over a year to finish this book. My diagnosis of testicular cancer and the sudden death of my dad forced me to take a step back and reflect before I could come to grips with writing again. Even though it took longer than I estimated, I'm quite pleased with the result.

I am grateful for the support I received in writing this book: everybody who helped me at O'Reilly and with writing the book. Shannon Cutt, my editor, who was a rock and consistently uplifting. Liz Rush, the sole technical reviewer who was able to make it through the process with me. Stephen Elston, who gave helpful feedback. Mike Loukides, for humoring my idea and letting it grow into two published books.
I’m grateful for friends, most especially Curtis Fanta. We’ve known each other since we were five. Thank you for always making time for me (and never being deterred by my busy schedule). To my family. For my nieces Zoe and Darby, for their curiosity and awe. To my brother Jake, for entertaining me with new music and movies. To my mom Carol, for letting me discover the answers, and advising me to take physics (even though I never have). You all mean so much to me. To the Le family, for treating me like one of their own. Thanks to Liliana for the Lego dates, and Sayone and Alyssa for being bright spirits in my life. For Martin and Han for their continual support and love. To Thanh (Dad) and Kim (Mom) for feeding me more food than I probably should have, and for giving me multimeters and books on opamps. Thanks for being a part of my life. To my grandma, who kept asking when she was going to see the cover. You’re always pushing me to achieve, be it through Boy Scouts or owning a business. Thank you for always being there. To Sophia, my wife. A year ago, we were in a hospital room while I was pumped full of painkillers…and we survived. You’ve been the most constant pillar of my adult life. Whenever I take on a big hairy audacious goal (like writing a book), you always put your needs aside and make sure I’m well taken care of. You mean the world to me. Last, to my dad. I miss your visits and our camping trips to the woods. I wish you were here to share this with me, but I cherish the time we did have together. This book is for you.
CHAPTER 1
Probably Approximately Correct Software
If you've ever flown on an airplane, you have participated in one of the safest forms of travel in the world. The odds of being killed in an airplane are 1 in 29.4 million, meaning that you could decide to become an airline pilot, and throughout a 40-year career, never once be in a crash. Those odds are staggering considering just how complex airplanes really are. But it wasn't always that way.

The year 2014 was bad for aviation; there were 824 aviation-related deaths, including the Malaysia Air plane that went missing. In 1929 there were 257 casualties. This makes it seem like we've become worse at aviation until you realize that in the US alone there are over 10 million flights per year, whereas in 1929 there were substantially fewer—about 50,000 to 100,000. This means that the overall probability of being killed in a plane wreck from 1929 to 2014 has plummeted from 0.25% to 0.00824%.

Plane travel changed over the years and so has software development. While in 1929 software development as we know it didn't exist, over the course of 85 years we have built and failed many software projects.

Recent examples include software projects like the launch of healthcare.gov, which was a fiscal disaster, costing around $634 million. Even worse are software projects that have other disastrous bugs. In 2013 NASDAQ shut down due to a software glitch and was fined $10 million USD. The year 2014 saw the Heartbleed bug infection, which made many sites using SSL vulnerable. As a result, CloudFlare revoked more than 100,000 SSL certificates, which they have said will cost them millions.

Software and airplanes share one common thread: they're both complex and when they fail, they fail catastrophically and publicly. Airlines have been able to ensure safe travel and decrease the probability of airline disasters by over 96%. Unfortunately we cannot say the same about software, which grows ever more complex. Catastrophic bugs strike with regularity, wasting billions of dollars. Why is it that airlines have become so safe and software so buggy?
Writing Software Right

Between 1929 and 2014 airplanes have become more complex, bigger, and faster. But with that growth also came more regulation from the FAA and international bodies as well as a culture of checklists among pilots.

While computer technology and hardware have rapidly changed, the software that runs it hasn't. We still use mostly procedural and object-oriented code that doesn't take full advantage of parallel computation. But programmers have made good strides toward coming up with guidelines for writing software and creating a culture of testing. These have led to the adoption of SOLID and TDD. SOLID is a set of principles that guide us to write better code, and TDD is either test-driven design or test-driven development. We will talk about these two mental models as they relate to writing the right software and talk about software-centric refactoring.
SOLID

SOLID is a framework that helps design better object-oriented code. In the same ways that the FAA defines what an airline or airplane should do, SOLID tells us how software should be created. Violations of FAA regulations occasionally happen and can range from disastrous to minute. The same is true with SOLID. These principles sometimes make a huge difference but most of the time are just guidelines. SOLID was introduced by Robert Martin as the Five Principles. The impetus was to write better code that is maintainable, understandable, and stable. Michael Feathers came up with the mnemonic device SOLID to remember them. SOLID stands for:

• Single Responsibility Principle (SRP)
• Open/Closed Principle (OCP)
• Liskov Substitution Principle (LSP)
• Interface Segregation Principle (ISP)
• Dependency Inversion Principle (DIP)
Single Responsibility Principle

The SRP has become one of the most prevalent parts of writing good object-oriented code. The reason is that single responsibility defines simple classes or objects. The
same mentality can be applied to functional programming with pure functions. But the idea is all about simplicity. Have a piece of software do one thing and only one thing. A good example of an SRP violation is a multi-tool (Figure 1-1). They do just about everything but unfortunately are only useful in a pinch.
Figure 1-1. A multi-tool like this has too many responsibilities
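The contrast can be sketched in Python; the MultiTool, Knife, and BottleOpener classes below are illustrative stand-ins for the multi-tool analogy, not code from the book:

```python
# An SRP violation: one class juggling several unrelated jobs, like a
# multi-tool that cuts, opens bottles, and turns screws all at once.
class MultiTool:
    def cut(self, material):
        return f"cut {material}"

    def open_bottle(self):
        return "bottle opened"

    def turn_screw(self):
        return "screw turned"


# Following the SRP: each class does one thing and only one thing,
# so each can change (or be tested) without dragging the others along.
class Knife:
    def cut(self, material):
        return f"cut {material}"


class BottleOpener:
    def open_bottle(self):
        return "bottle opened"
```

Splitting responsibilities this way keeps each class small enough to reason about on its own.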
Open/Closed Principle

The OCP, sometimes also called encapsulation, is the principle that objects should be open for extending but not for modification. This can be shown in the case of a counter object that has an internal count associated with it. The object has the methods increment and decrement. This object should not allow anybody to change the internal count unless it follows the defined API, but it can be extended (e.g., to notify someone of a count change by an object like Notifier).
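A minimal Python sketch of that counter, with an illustrative Notifier that records count changes (the class names follow the text, but the implementation details are ours):

```python
class Counter:
    """Holds a count that can only change through its public API."""

    def __init__(self):
        self._count = 0        # internal state, closed for direct modification
        self._observers = []

    def register(self, observer):
        # Open for extension: new behavior hooks in without modifying Counter.
        self._observers.append(observer)

    def increment(self):
        self._count += 1
        self._notify()

    def decrement(self):
        self._count -= 1
        self._notify()

    @property
    def count(self):
        return self._count

    def _notify(self):
        for observer in self._observers:
            observer.notify(self.count)


class Notifier:
    """Extension that records count changes; Counter itself is untouched."""

    def __init__(self):
        self.seen = []

    def notify(self, count):
        self.seen.append(count)
```

Adding a new kind of observer never requires editing Counter, which is the "open for extension, closed for modification" idea in miniature.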
Liskov Substitution Principle

The LSP states that any subtype should be easily substituted out from underneath an object tree without side effect. For instance, a model car could be substituted for a real car.
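The model-car example can be sketched as follows (Car, ModelCar, and road_trip are hypothetical names for illustration):

```python
class Car:
    """Base type: anything that can drive a given distance."""

    def drive(self, distance_km):
        return f"drove {distance_km} km"


class ModelCar(Car):
    """Subtype honoring the same contract, so it substitutes cleanly."""

    def drive(self, distance_km):
        # Scaled down, but same signature and same kind of result.
        return f"drove {distance_km / 100} km"


def road_trip(car: Car) -> str:
    # Works for any Car subtype without knowing which one it got.
    return car.drive(100)
```

Because ModelCar keeps the contract of Car, road_trip never needs to check which subtype it received.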
Interface Segregation Principle

The ISP is the principle that having many client-specific interfaces is better than a general interface for all clients. This principle is about simplifying the interchange of data between entities. A good example would be separating garbage, compost, and recycling. Instead of having one big garbage can it has three, specific to the garbage type.
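A rough Python sketch of the bins analogy, using typing.Protocol for the small client-specific interfaces (all names here are illustrative, not from the book):

```python
from typing import Protocol


# One general "Waste" interface would force every client to care about
# composting AND recycling. Narrow, client-specific interfaces act like
# separate bins: each client only sees the one it needs.
class Compostable(Protocol):
    def compost(self) -> str: ...


class Recyclable(Protocol):
    def recycle(self) -> str: ...


class AppleCore:
    def compost(self) -> str:
        return "composted"


class GlassBottle:
    def recycle(self) -> str:
        return "recycled"


def handle_compost(item: Compostable) -> str:
    # Only needs the compost interface; knows nothing about recycling.
    return item.compost()
```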
Dependency Inversion Principle

The DIP is a principle that guides us to depend on abstractions, not concretions. What this is saying is that we should build a layer or inheritance tree of objects. The example Robert Martin explains in his original paper1 is that we should have a KeyboardReader inherit from a general Reader object instead of being everything in one class. This also aligns well with what Arthur Riel said in Object Oriented Design Heuristics about avoiding god classes. While you could solder a wire directly from a guitar to an amplifier, it most likely would be inefficient and not sound very good.

The SOLID framework has stood the test of time and has shown up in many books by Martin and Feathers, as well as appearing in Sandi Metz's book Practical Object-Oriented Design in Ruby. This framework is meant to be a guideline but also to remind us of the simple things so that when we're writing code we write the best we can. These guidelines help write architecturally correct software.
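A rough Python sketch of that layering; the Reader and KeyboardReader names come from the text, while the echo client and the implementation details are our own illustration:

```python
from abc import ABC, abstractmethod


class Reader(ABC):
    """Abstraction that high-level code depends on."""

    @abstractmethod
    def read(self) -> str: ...


class KeyboardReader(Reader):
    """One concrete input source; file or network readers could be added
    later without touching any code that depends on Reader."""

    def __init__(self, keystrokes):
        self._keystrokes = keystrokes

    def read(self) -> str:
        return "".join(self._keystrokes)


def echo(reader: Reader) -> str:
    # Depends on the Reader abstraction, not on any concrete input device.
    return reader.read()
```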
Testing or TDD

In the early days of aviation, pilots didn't use checklists to test whether their airplane was ready for takeoff. In the book The Right Stuff by Tom Wolfe, most of the original test pilots like Chuck Yeager would go by feel and their own ability to manage the complexities of the craft. This also led to a quarter of test pilots being killed in action.2

Today, things are different. Before taking off, pilots go through a set of checks. Some of these checks can seem arduous, like introducing yourself by name to the other crewmembers. But imagine if you find yourself in a tailspin and need to notify someone of a problem immediately. If you didn't know their name it'd be hard to communicate. The same is true for good software. Having a set of systematic checks, running regularly, to test whether our software is working properly or not is what makes software operate consistently.

In the early days of software, most tests were done after writing the original software (see also the waterfall model, used by NASA and other organizations to design software and test it for production). This worked well with the style of project management common then. Similar to how airplanes are still built, software used to be designed first, written according to specs, and then tested before delivery to the customer. But because technology has a short shelf life, this method of testing could take
1. Robert Martin, "The Dependency Inversion Principle."
2. Atul Gawande, The Checklist Manifesto (New York: Metropolitan Books), p. 161.
months or even years. This led to the Agile Manifesto as well as the culture of testing and TDD, spearheaded by Kent Beck, Ward Cunningham, and many others.

The idea of test-driven development is simple: write a test to record what you want to achieve, test to make sure the test fails first, write the code to fix the test, and then, after it passes, fix your code to fit in with the SOLID guidelines. While many people argue that this adds time to the development cycle, it drastically reduces bug deficiencies in code and improves its stability as it operates in production.3

Airplanes, with their low tolerance for failure, mostly operate the same way. Before a pilot flies the Boeing 787 they have spent X amount of hours in a flight simulator understanding and testing their knowledge of the plane. Before planes take off they are tested, and during the flight they are tested again. Modern software development is very much the same way. We test our knowledge by writing tests before deploying it, as well as when something is deployed (by monitoring).

But this still leaves one problem: the reality that since not everything stays the same, writing a test doesn't make good code. David Heinemeier Hansson, in his viral presentation about test-driven damage, has made some very good points about how following TDD and SOLID blindly will yield complicated code. Most of his points have to do with needless complication due to extracting out every piece of code into different classes, or writing code to be testable and not readable. But I would argue that this is where the last factor in writing software right comes in: refactoring.
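The test-first cycle just described can be shown in miniature; slugify is a hypothetical function invented for this sketch, not code from the book:

```python
# Step 1 (red): write the test first, encoding the behavior we want.
# Run it against a stub (or before writing slugify) and watch it fail.
def test_slugify_replaces_spaces_and_lowercases():
    assert slugify("Hello World") == "hello-world"


# Step 2 (green): write just enough code to make the failing test pass.
def slugify(title: str) -> str:
    return title.strip().lower().replace(" ", "-")


# Step 3 (refactor): with the test green, clean the code up toward the
# SOLID guidelines, rerunning the test after every change.
test_slugify_replaces_spaces_and_lowercases()
```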
Refactoring

Refactoring is one of the hardest programming practices to explain to nonprogrammers, who don't get to see what is underneath the surface. When you fly on a plane you are seeing only 20% of what makes the plane fly. Underneath all of the pieces of aluminum and titanium are intricate electrical systems that power emergency lighting in case anything fails during flight, plumbing, trusses engineered to be light and also sturdy—too much to list here. In many ways explaining what goes into an airplane is like explaining to someone that there's pipes under the sink below that beautiful faucet.

Refactoring takes the existing structure and makes it better. It's taking a messy circuit breaker and cleaning it up so that when you look at it, you know exactly what is going on. While airplanes are rigidly designed, software is not. Things change rapidly in software. Many companies are continuously deploying software to a production environment. All of that feature development can sometimes cause a certain amount of technical debt.

3. Nachiappan Nagappan et al., "Realizing Quality Improvement through Test Driven Development: Results and Experience of Four Industrial Teams," Empirical Software Engineering 13, no. 3 (2008): 289–302.

Technical debt, also known as design debt or code debt, is a metaphor for poor system design that happens over time with software projects. The debilitating problem of technical debt is that it accrues interest and eventually blocks future feature development. If you've been on a project long enough, you will know the feeling of having fast releases in the beginning only to come to a standstill toward the end.

Technical debt in many cases arises through not writing tests or not following the SOLID principles. Having technical debt isn't a bad thing—sometimes projects need to be pushed out earlier so business can expand—but not paying down debt will eventually accrue enough interest to destroy a project. The way we get over this is by refactoring our code.

By refactoring, we move our code closer to the SOLID guidelines and a TDD codebase. It's cleaning up the existing code and making it easy for new developers to come in and work on the code that exists like so:

1. Follow the SOLID guidelines
   a. Single Responsibility Principle
   b. Open/Closed Principle
   c. Liskov Substitution Principle
   d. Interface Segregation Principle
   e. Dependency Inversion Principle
2. Implement TDD (test-driven development/design)
3. Refactor your code to avoid a buildup of technical debt

The real question now is what makes the software right?
Writing the Right Software

Writing the right software is much trickier than writing software right. In his book Specification by Example, Gojko Adzic determines the best approach to writing software is to craft specifications first, then to work with consumers directly. Only after the specification is complete does one write the code to fit that spec. But this suffers from the problem of practice—sometimes the world isn't what we think it is. Our initial model of what we think is true many times isn't.

Webvan, for instance, failed miserably at building an online grocery business. They had almost $400 million in investment capital and rapidly built infrastructure to support what they thought would be a booming business. Unfortunately they were a flop because of the cost of shipping food and the overestimated market for online grocery buying. By many measures they were a success at writing software and building a business, but the market just wasn't ready for them and they quickly went bankrupt. Today a lot of the infrastructure they built is used by Amazon.com for AmazonFresh.

In theory, theory and practice are the same. In practice they are not.
—Albert Einstein
We are now at the point where theoretically we can write software correctly and it’ll work, but writing the right software is a much fuzzier problem. This is where machine learning really comes in.
Writing the Right Software with Machine Learning

In The Knowledge-Creating Company, Nonaka and Takeuchi outlined what made Japanese companies so successful in the 1980s. Instead of a top-down approach of solving the problem, they would learn over time. Their example of kneading bread and turning that into a breadmaker is a perfect example of iteration and is easily applied to software development. But we can go further with machine learning.
What Exactly Is Machine Learning?

According to most definitions, machine learning is a collection of algorithms, techniques, and tricks of the trade that allow machines to learn from data—that is, something represented in numerical format (matrices, vectors, etc.).

To understand machine learning better, though, let's look at how it came into existence. In the 1950s extensive research was done on playing checkers. A lot of these models focused on playing the game better and coming up with optimal strategies. You could probably come up with a simple enough program to play checkers today just by working backward from a win, mapping out a decision tree, and optimizing that way. Yet this was a very narrow and deductive way of reasoning. Effectively the agent had to be programmed. In most of these early programs there was no context or irrational behavior programmed in.

About 30 years later, machine learning started to take off. Many of the same minds started working on problems involving spam filtering, classification, and general data analysis.

The important shift here is a move away from computerized deduction to computerized induction. Much as Sherlock Holmes did, deduction involves using complex
logic models to come to a conclusion. By contrast, induction involves taking data as being true and trying to fit a model to that data. This shift has created many great advances in finding good-enough solutions to common problems.

The issue with inductive reasoning, though, is that you can only feed the algorithm data that you know about. Quantifying some things is exceptionally difficult. For instance, how could you quantify how cuddly a kitten looks in an image?

In the last 10 years we have been witnessing a renaissance around deep learning, which alleviates that problem. Instead of relying on data coded by humans, algorithms like autoencoders have been able to find data points we couldn't quantify before. This all sounds amazing, but with all this power comes an exceptionally high cost and responsibility.
The High Interest Credit Card Debt of Machine Learning

Recently, in a paper published by Google titled “Machine Learning: The High Interest Credit Card of Technical Debt”, Sculley et al. explained that machine learning projects suffer from the same technical debt issues outlined above, plus more (Table 1-1). They noted that machine learning projects are inherently complex, have vague boundaries, rely heavily on data dependencies, suffer from system-level spaghetti code, and can radically change due to changes in the outside world. Their argument is that these are specifically related to machine learning projects and for the most part they are.

Instead of going through these issues one by one, I thought it would be more interesting to tie back to our original discussion of SOLID and TDD as well as refactoring and see how it relates to machine learning code.

Table 1-1. The high interest credit card debt of machine learning

Machine learning problem | Manifests as | SOLID violation
Entanglement | Changing one factor changes everything | SRP
Hidden feedback loops | Having built-in hidden features in model | OCP
Undeclared consumers/visibility debt | | ISP
Unstable data dependencies | Volatile data | ISP
Underutilized data dependencies | Unused dimensions | LSP
Correction cascade | | *
Glue code | Writing code that does everything | SRP
Pipeline jungles | Sending data through complex workflow | DIP
Experimental paths | Dead paths that go nowhere | DIP
Configuration debt | Using old configurations for new data | *
Fixed thresholds in a dynamic world | Not being flexible to changes in correlations | *
Correlations change | Modeling correlation over causation | ML Specific
SOLID Applied to Machine Learning

SOLID, as you remember, is just a guideline reminding us to follow certain goals when writing object-oriented code. Many machine learning algorithms are inherently not object oriented. They are functional, mathematical, and use lots of statistics, but that doesn't have to be the case. Instead of thinking of things in purely functional terms, we can strive to use objects around each row vector and matrix of data.
SRP

In machine learning code, one of the biggest challenges for people to realize is that the code and the data are dependent on each other. Without the data the machine learning algorithm is worthless, and without the machine learning algorithm we wouldn't know what to do with the data. So by definition they are tightly intertwined and coupled. This tightly coupled dependency is probably one of the biggest reasons that machine learning projects fail.

This dependency manifests as two problems in machine learning code: entanglement and glue code. Entanglement is sometimes called the principle of Changing Anything Changes Everything, or CACE. The simplest example is probabilities. If you remove one probability from a distribution, then all the rest have to adjust. This is a violation of SRP.

Possible mitigation strategies include isolating models, analyzing dimensional dependencies,4 and regularization techniques.5 We will return to this problem when we review Bayesian models and probability models.

Glue code is the code that accumulates over time in a coding project. Its purpose is usually to glue two separate pieces together inelegantly. It also tends to be the type of code that tries to solve all problems instead of just one.

Whether machine learning researchers want to admit it or not, many times the actual machine learning algorithms themselves are quite simple. The surrounding code is what makes up the bulk of the project. Depending on what library you use, whether it be GraphLab, MATLAB, scikit-learn, or R, they all have their own implementation of vectors and matrices, which is what machine learning mostly comes down to.
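The CACE problem is easy to see with probabilities. As a toy illustration (not from the book), removing one outcome from a distribution forces every remaining probability to shift, because the rest must renormalize to sum to 1:

```python
# Toy illustration of CACE (Changing Anything Changes Everything):
# dropping one outcome from a probability distribution changes
# every remaining probability, since they must renormalize to 1.

def renormalize(dist):
    """Rescale a dict of outcome -> probability so it sums to 1."""
    total = sum(dist.values())
    return {k: v / total for k, v in dist.items()}

weather = {"sun": 0.5, "rain": 0.3, "snow": 0.2}

# Remove "snow" -- note that BOTH remaining probabilities shift.
without_snow = {k: v for k, v in weather.items() if k != "snow"}
probs = renormalize(without_snow)
print({k: round(v, 3) for k, v in probs.items()})  # {'sun': 0.625, 'rain': 0.375}
```

Changing one entry changed every other entry, which is exactly why entangled models are so hard to modify safely.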
4 H. B. McMahan et al., “Ad Click Prediction: A View from the Trenches.” In The 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 2013, Chicago, IL, August 11–14, 2013.
5 A. Lavoie et al., “History Dependent Domain Adaptation.” In Domain Adaptation Workshop at NIPS ’11, 2011.
OCP

Recall that the OCP is about opening classes for extension but not modification. One way this manifests in machine learning code is the problem of CACE. This can manifest in any software project, but in machine learning projects it is often seen as hidden feedback loops.

A good example of a hidden feedback loop is predictive policing. Over the last few years, many researchers have shown that machine learning algorithms can be applied to determine where crimes will occur. Preliminary results have shown that these algorithms work exceptionally well. But unfortunately there is a dark side to them as well. While these algorithms can show where crimes will happen, what will naturally occur is that the police will start patrolling those areas more and finding more crimes there, and as a result will self-reinforce the algorithm. This could also be called confirmation bias, or the bias of confirming our preconceived notion, and it also has the downside of enforcing systematic discrimination against certain demographics or neighborhoods.

While hidden feedback loops are hard to detect, they should be watched for with a keen eye and taken out.
LSP

Not a lot of people talk about the LSP anymore because many programmers are advocating for composition over inheritance these days. But in the machine learning world, the LSP is violated a lot. Many times we are given data sets that we don't have all the answers for yet. Sometimes these data sets are thousands of dimensions wide. Running algorithms against those data sets can actually violate the LSP.

One common manifestation in machine learning code is underutilized data dependencies. Many times we are given data sets that include thousands of dimensions, which can sometimes yield pertinent information and sometimes not. Our models might take all dimensions yet use one infrequently. So for instance, in classifying mushrooms as either poisonous or edible, information like odor can be a big indicator while ring number isn't. The ring number has low granularity and can only be zero, one, or two; thus it really doesn't add much to our model of classifying mushrooms. So that information could be trimmed out of our model and wouldn't greatly degrade performance.

You might be wondering why this is related to the LSP; the reason is that if we can use only the smallest set of data points (or features), we have built the best model possible. This also aligns well with Ockham's Razor, which states that the simplest solution is the best one.
ISP

The ISP is the notion that a client-specific interface is better than a general purpose one. In machine learning projects this can often be hard to enforce because of the tight coupling of data to the code. In machine learning code, the ISP is usually violated by two types of problems: visibility debt and unstable data.

Take for instance the case where a company has a reporting database that is used to collect information about sales, shipping data, and other pieces of crucial information. This is all managed through some sort of project that gets the data into this database. The customer that this database defines is a machine learning project that takes previous sales data to predict the sales for the future. Then one day during cleanup, someone renames a table that used to be called something very confusing to something much more useful. All hell breaks loose and people are wondering what happened.

What ended up happening is that the machine learning project wasn't the only consumer of the data; six Access databases were attached to it, too. The fact that there were that many undeclared consumers is in itself a piece of debt for a machine learning project. This type of debt is called visibility debt, and while it mostly doesn't affect a project's stability, sometimes, as features are built, at some point it will hold everything back.

Data is dependent on the code used to make inductions from it, so building a stable project requires having stable data. Many times this just isn't the case. Take for instance the price of a stock; in the morning it might be valuable but hours later become worthless. This ends up violating the ISP because we are looking at the general data stream instead of one specific to the client, which can make portfolio trading algorithms very difficult to build. One common trick is to build some sort of exponential weighting scheme around data; another more important one is to version data streams. This versioned scheme serves as a viable way to limit the volatility of a model's predictions.
DIP

The Dependency Inversion Principle is about limiting our buildups of data and making code more flexible for future changes. In a machine learning project we see concretions happen in two specific ways: pipeline jungles and experimental paths.

Pipeline jungles are common in data-driven projects and are almost a form of glue code. This is the amalgamation of data being prepared and moved around. In some cases this code is tying everything together so the model can work with the prepared data. Unfortunately, though, over time these jungles start to grow complicated and unusable.
Machine learning code requires both software and data. They are intertwined and inseparable. Sometimes, then, we have to test things during production. Sometimes tests on our machines give us false hope and we need to experiment with a line of code. Those experimental paths add up over time and end up polluting our workspace. The best way of reducing the associated debt is to introduce tombstoning, which is an old technique from C.

Tombstones are a method of marking something as ready to be deleted. If the method is called in production it will log an event to a logfile that can be used to sweep the codebase later.

For those of you who have studied garbage collection you most likely have heard of this method as mark and sweep. Basically you mark an object as ready to be deleted and later sweep marked objects out.
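As a minimal sketch of that idea (the function names are hypothetical, and this uses a Python decorator rather than the original C technique), a tombstone flags a suspect code path and logs every call it receives in production, so paths whose tombstone never fires can be swept later:

```python
# A tombstone: mark a function as a deletion candidate and log
# every call. Paths that never log are safe to sweep out later.
import functools
import logging

logging.basicConfig(level=logging.INFO)

def tombstone(func):
    """Mark func as a candidate for deletion; log each call."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logging.info("TOMBSTONE: %s was called; it is still live code",
                     func.__name__)
        return func(*args, **kwargs)
    return wrapper

@tombstone
def legacy_feature_scaling(xs):
    # An experimental path we believe is dead; the log will tell us.
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

print(legacy_feature_scaling([1, 2, 3]))  # [0.0, 0.5, 1.0]
```

Sweeping the codebase then amounts to grepping the production logs for `TOMBSTONE` entries and deleting any marked function that never appears.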
Machine Learning Code Is Complex but Not Impossible

At times, machine learning code can be difficult to write and understand, but it is far from impossible. Remember the flight analogy we began with, and use the SOLID guidelines as your “preflight” checklist for writing successful machine learning code—while complex, it doesn't have to be complicated.

In the same vein, you can compare machine learning code to flying a spaceship—it's certainly been done before, but it's still bleeding edge. With the SOLID checklist model, we can launch our code effectively using TDD and refactoring. In essence, writing successful machine learning code comes down to being disciplined enough to follow the principles of design we've laid out in this chapter, and writing tests to support your code-based hypotheses. Another critical element in writing effective code is being flexible and adapting to the changes it will encounter in the real world.
TDD: Scientific Method 2.0

Every true scientist is a dreamer and a skeptic. Daring to put a person on the moon was audacious, but through systematic research and development we have accomplished that and much more. The same is true with machine learning code. Some of the applications are fascinating but also hard to pull off. The secret to doing so is to use the checklist of SOLID for machine learning and the tools of TDD and refactoring to get us there.

TDD is more of a style of problem solving, not a mandate from above. What testing gives us is a feedback loop that we can use to work through tough problems. As scientists would assert that they need to first hypothesize, test, and theorize, we can assert that as TDD practitioners, the process of red (the tests fail), green (the tests pass), refactor is just as viable.
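As a miniature illustration (not from the book), one pass through the red-green-refactor loop might look like this, with bare assert statements standing in for a real test runner such as pytest:

```python
# Red: this test was written first, against a stub that failed it.
# Green: mean() below is the simplest code that makes it pass.
def mean(xs):
    return sum(xs) / len(xs)

assert mean([1, 2, 3]) == 2.0

# Refactor: reviewing the green code exposed a corner case (the
# empty list), so we reshape the code and pin it down with a test.
def safe_mean(xs):
    return sum(xs) / len(xs) if xs else 0.0

assert safe_mean([]) == 0.0
assert safe_mean([1, 2, 3]) == 2.0
print("all tests pass")
```

Each failing assert plays the role of a falsifiable hypothesis; the passing suite is the evidence that the current theory of the code holds.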
This book will delve heavily into applying not only TDD but also SOLID principles to machine learning, with the goal being to refactor our way to building a stable, scalable, and easy-to-use model.
Refactoring Our Way to Knowledge

As mentioned, refactoring is the ability to edit one's work and to rethink what was once stated. Throughout the book we will talk about refactoring common machine learning pitfalls as it applies to algorithms.
The Plan for the Book

This book will cover a lot of ground with machine learning, but by the end you should have a better grasp of how to write machine learning code as well as how to deploy to a production environment and operate at scale. Machine learning is a fascinating field that can achieve much, but without discipline, checklists, and guidelines, many machine learning projects are doomed to fail.

Throughout the book we will tie back to the original principles in this chapter by talking about SOLID principles, testing our code (using various means), and refactoring as a way to continually learn from and improve the performance of our code.

Every chapter will explain the Python packages we will use and describe a general testing plan. While machine learning code isn't testable in a one-to-one case, it ends up being something for which we can write tests to help our knowledge of the problem.
CHAPTER 2
A Quick Introduction to Machine Learning
You’ve picked up this book because you’re interested in machine learning. While you probably have an idea of what machine learning is, the subject is often defined somewhat vaguely. In this quick introduction, I’ll go over what exactly machine learning is, and provide a general framework for thinking about machine learning algorithms.
What Is Machine Learning?

Machine learning is the intersection between theoretically sound computer science and practically noisy data. Essentially, it’s about machines making sense out of data in much the same way that humans do.

Machine learning is a type of artificial intelligence whereby an algorithm or method extracts patterns from data. Machine learning solves a few general problems; these are listed in Table 2-1 and described in the subsections that follow.

Table 2-1. The problems that machine learning can solve

Problem | Machine learning category
Fitting some data to a function or function approximation | Supervised learning
Figuring out what the data is without any feedback | Unsupervised learning
Maximizing rewards over time | Reinforcement learning
Supervised Learning

Supervised learning, or function approximation, is simply fitting data to a function of any variety. For instance, given the noisy data shown in Figure 2-1, you can fit a line that generally approximates it.
Figure 2-1. This data fits quite well to a straight line
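As a quick sketch (synthetic data, not the book's code), fitting a straight line to noisy points like those in Figure 2-1 takes a few lines of NumPy:

```python
# Supervised learning in its simplest form: recover a line from
# noisy (x, y) samples of the true relationship y = 2x + 1.
import numpy as np

rng = np.random.RandomState(42)          # fixed seed for repeatability
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.shape)

slope, intercept = np.polyfit(x, y, deg=1)  # least-squares line fit
print(round(slope, 1), round(intercept, 1))  # close to 2.0 and 1.0
```

The labels (the y values) are the "supervision"; the algorithm's job is only to find the function that maps x to y.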
Unsupervised Learning

Unsupervised learning involves figuring out what makes the data special. For instance, if we were given many data points, we could group them by similarity (Figure 2-2), or perhaps determine which variables are better than others.
Figure 2-2. Two clusters grouped by similarity
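As an illustrative sketch (synthetic data and a bare-bones k-means written from scratch, not the book's code), grouping points by similarity as in Figure 2-2 might look like:

```python
# Unsupervised learning: no labels are given; the algorithm groups
# the points purely by how close they are to each other.
import numpy as np

rng = np.random.RandomState(0)
a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(20, 2))
b = rng.normal(loc=[5.0, 5.0], scale=0.5, size=(20, 2))
points = np.vstack([a, b])

# Bare-bones k-means: assign to nearest centroid, recompute, repeat.
centroids = points[[0, -1]].astype(float)   # seed with two points
for _ in range(10):
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    centroids = np.array([points[labels == k].mean(axis=0) for k in range(2)])

print(labels[:3], labels[-3:])  # [0 0 0] [1 1 1]: two clusters recovered
```

Note that nothing told the algorithm which group any point belonged to; the structure was recovered from the distances alone.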
Reinforcement Learning

Reinforcement learning involves figuring out how to play a multistage game with rewards and payoffs. Think of it as the algorithms that optimize the life of something. A common example of a reinforcement learning algorithm is a mouse trying to find cheese in a maze. For the most part, the mouse gets zero reward until it finally finds the cheese.

We will discuss supervised and unsupervised learning in this book but skip reinforcement learning. In the final chapter, I include some resources that you can check out if you’d like to learn more about reinforcement learning.
What Can Machine Learning Accomplish?

What makes machine learning unique is its ability to optimally figure things out. But each machine learning algorithm has quirks and trade-offs. Some do better than others. This book covers quite a few algorithms, so Table 2-2 provides a matrix to help you navigate them and determine how useful each will be to you.

Table 2-2. Machine learning algorithm matrix

Algorithm | Learning type | Class | Restriction bias | Preference bias
K-Nearest Neighbors | Supervised | Instance based | Generally speaking, KNN is good for measuring distance-based approximations; it suffers from the curse of dimensionality | Prefers problems that are distance based
Naive Bayes | Supervised | Probabilistic | Works on problems where the inputs are independent from each other | Prefers problems where the probability will always be greater than zero for each class
Decision Trees/Random Forests | Supervised | Tree | Will work on just about anything | Prefers data that isn’t highly variable

Refer to this matrix throughout the book to understand how these algorithms relate to one another. Machine learning is only as good as what it applies to, so let’s get to implementing some of these algorithms!

Before we get started, you will need to install Python. This book was tested using Python 2.7.12, but most likely it will work with Python 3.x as well. All of those changes will be annotated in the book’s coding resources, which are available on GitHub.
Mathematical Notation Used Throughout the Book

This book uses mathematics to solve problems, but all of the examples are programmer-centric. Throughout the book, I’ll use the mathematical notations shown in Table 2-3.
Table 2-3. Mathematical notations used in this book’s examples

Symbol | How do you say it? | What does it do?
∑_{i=0}^{n} x_i | The sum of all xs from x0 to xn | This is the same thing as x0 + x1 + ⋯ + xn.
ǀxǀ | The absolute value of x | This takes any value of x and makes it positive. So ǀ–xǀ = ǀxǀ.
√4 | The square root of 4 | This is the opposite of 2².
z_k = <0.5, 0.5> | Vector z_k equals 0.5 and 0.5 | This is a point on the xy plane and is denoted as a vector, which is a group of numerical points.
log2(2) | Log 2 | This solves for i in 2^i = 2.
P(A) | Probability of A | In many cases, this is the count of A divided by the total occurrences.
P(AǀB) | Probability of A given B | This is the probability of A and B divided by the probability of B.
{1,2,3} ∩ {1} | The intersection of set one and two | This turns into a set {1}.
{1,2,3} ∪ {4,1} | The union of set one and two | This equates to {1,2,3,4}.
det(C) | The determinant of the matrix C | This will help determine whether a matrix is invertible or not.
a ∝ b | a is proportional to b | This means that m · a = b.
min f(x) | Minimize f(x) | This is an objective function to minimize the function f(x).
X^T | Transpose of the matrix X | Take all elements of the matrix and switch the row with the column.
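If it helps to see these notations as code, most of them map directly onto Python and NumPy (an illustrative sketch, not the book's code):

```python
# Table 2-3's notations, written out in Python/NumPy.
import numpy as np

x = np.array([1.0, 2.0, 3.0])

print(x.sum())                    # sum of all x_i          -> 6.0
print(abs(-4))                    # absolute value |x|      -> 4
print(np.sqrt(4))                 # square root of 4        -> 2.0
print(np.log2(2))                 # log base 2 of 2         -> 1.0
print({1, 2, 3} & {1})            # set intersection        -> {1}
print({1, 2, 3} | {4, 1})         # set union               -> {1, 2, 3, 4}

C = np.array([[1.0, 2.0], [3.0, 4.0]])
print(np.linalg.det(C))           # determinant, about -2.0
print(C.T)                        # transpose: rows become columns
```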
Conclusion

This isn’t an exhaustive introduction to machine learning, but that’s okay. There’s always going to be a lot for us all to learn when it comes to this complex subject, but for the remainder of this book, this should serve us well in approaching these problems.
CHAPTER 3
K-Nearest Neighbors
Have you ever bought a house before? If you’re like a lot of people around the world, the joy of owning your own home is exciting, but the process of finding and buying a house can be stressful. Whether we’re in an economic boom or recession, everybody wants to get the best house for the most reasonable price.

But how would you go about buying a house? How do you appraise a house? How does a company like Zillow come up with their Zestimates? We’ll spend most of this chapter answering questions related to this fundamental concept: distance-based approximations.

First we’ll talk about how we can estimate a house’s value. Then we’ll discuss how to classify houses into categories such as “Buy,” “Hold,” and “Sell.” At that point we’ll talk about a general algorithm, K-Nearest Neighbors, and how it can be used to solve problems such as this. We’ll break it down into a few sections of what makes something near, as well as what a neighborhood really is (i.e., what is the optimal K for something?).
How Do You Determine Whether You Want to Buy a House?

This question has plagued many of us for a long time. If you are going out to buy a house, or calculating whether it’s better to rent, you are most likely trying to answer this question implicitly. Home appraisals are a tricky subject, and are notorious for drift with calculations. For instance, on Zillow’s website they explain that their famous Zestimate is flawed. They state that based on where you are looking, the value might drift by a localized amount.
Location is really key with houses. Seattle might have a different demand curve than San Francisco, which makes complete sense if you know housing! The question of whether to buy or not comes down to value amortized over the course of how long you’re living there. But how do you come up with a value?
How Valuable Is That House?

Things are worth as much as someone is willing to pay.
—Old Saying
Valuing a house is tough business. Even if we were able to come up with a model with many endogenous variables that make a huge difference, it doesn’t cover up the fact that buying a house is subjective and sometimes includes a bidding war. These are almost impossible to predict. You’re more than welcome to use this to value houses, but there will be errors that take years of experience to overcome.
A house is worth as much as it’ll sell for. The answer to how valuable a house is, at its core, is simple but difficult to estimate. Due to inelastic supply, or because houses are all fairly unique, home sale prices have a tendency to be erratic. Sometimes you just love a house and will pay a premium for it.

But let’s just say that the house is worth what someone will pay for it. This is a function based on a bag of attributes associated with houses. We might determine that a good approach to estimating house values would be:
This model could be found through regression (which we’ll cover in Chapter 5) or other approximation algorithms, but this is missing a major component of real estate: “Location, Location, Location!” To overcome this, we can come up with something called a hedonic regression.
Hedonic Regression

You probably already know of a frequently used real-life hedonic regression: the CPI index. This is used as a way of decomposing baskets of items that people commonly buy to come up with an index for inflation.
Economics is a dismal science because we’re trying to approximate rational behaviors. Unfortunately we are predictably irrational (shout-out to Dan Ariely). But a good algorithm for valuing houses, similar to what home appraisers use, is called hedonic regression.

The general idea with hard-to-value items like houses that don’t have a highly liquid market and suffer from subjectivity is that there are externalities that we can’t directly estimate. For instance, how would you estimate pollution, noise, or neighbors who are jerks?

To overcome this, hedonic regression takes a different approach than general regression. Instead of focusing on fitting a curve to a bag of attributes, it focuses on the components of a house. For instance, the hedonic method allows you to find out how much a bedroom costs (on average).

Take a look at Table 3-1, which compares housing prices with number of bedrooms. From here we can fit a naive approximation of value to bedroom number, to come up with an estimate of cost per bedroom.

Table 3-1. House price by number of bedrooms

Price (in $1,000) | Bedrooms
$899 | 4
$399 | 3
$749 | 3
$649 | 3
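As a sketch of that naive approximation (illustrative, not the book's code), a least-squares line through the Table 3-1 data backs out an average cost per bedroom:

```python
# Fit price (in $1,000s) against bedroom count for the Table 3-1 data.
import numpy as np

bedrooms = np.array([4, 3, 3, 3])
prices = np.array([899, 399, 749, 649])

per_bedroom, base = np.polyfit(bedrooms, prices, deg=1)
print(round(per_bedroom))  # 300, i.e. roughly $300k per extra bedroom
```

With only four points this estimate is very noisy, which is part of why hedonic models lean on much larger samples in practice.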
This is extremely useful for valuing houses because as consumers, we can use this to focus on what matters to us and decompose houses into whether they’re overpriced because of bedroom numbers or the fact that they’re right next to a park.

This gets us to the next improvement, which is location. Even with hedonic regression, we suffer from the problem of location. A bedroom in SoHo in London, England is probably more expensive than a bedroom in Mumbai, India. So for that we need to focus on the neighborhood.
What Is a Neighborhood?

The value of a house is often determined by its neighborhood. For instance, in Seattle, an apartment in Capitol Hill is more expensive than one in Lake City. Generally
speaking, the cost of commuting is worth half of your hourly wage plus maintenance and gas,1 so a neighborhood closer to the economic center is more valuable.

But how would we focus only on the neighborhood? Theoretically we could come up with an elegant solution using something like an exponential decay function that weights houses closer to downtown higher and farther houses lower. Or we could come up with something static that works exceptionally well: K-Nearest Neighbors.
K-Nearest Neighbors

What if we were to come up with a solution that is inelegant but works just as well? Say we were to assert that we will only look at an arbitrary amount of houses near to a similar house we’re looking at. Would that also work?

Surprisingly, yes. This is the K-Nearest Neighbor (KNN) solution, which performs exceptionally well. It takes two forms: a regression, where we want a value, or a classification. To apply KNN to our problem of house values, we would just have to find the nearest K neighbors.

The KNN algorithm was originally introduced by Drs. Evelyn Fix and J. L. Hodges Jr. in an unpublished technical report written for the U.S. Air Force School of Aviation Medicine. Fix and Hodges’ original research focused on splitting up classification problems into a few subproblems:

• Distributions F and G are completely known.
• Distributions F and G are completely known except for a few parameters.
• F and G are unknown, except possibly for the existence of densities.

Fix and Hodges pointed out that if you know the distributions of two classifications or you know the distribution minus some parameters, you can easily back out useful solutions. Therefore, they focused their work on the more difficult case of finding classifications among distributions that are unknown. What they came up with laid the groundwork for the KNN algorithm.

This opens a few more questions:

• What are neighbors, and what makes them near?
• How do we pick the arbitrary number of neighbors, K?
1 Van Ommeren et al., “Estimating the Marginal Willingness to Pay for Commuting,” Journal of Regional Science 40 (2000): 541–63.
• What do we do with the neighbors afterward?
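Before answering those questions, here is a sketch of KNN used as a regression (toy data and hypothetical features; a real model would scale the features so square footage doesn't drown out bedroom count):

```python
# Estimate a house's value as the average price of its K nearest
# neighbors in feature space. The data below is made up.
import math

# (square feet, bedrooms) -> price in $1,000s
houses = [
    ((1500, 3), 450),
    ((1700, 3), 510),
    ((2100, 4), 720),
    ((900, 2), 300),
    ((2300, 4), 760),
]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_value(query, k=3):
    nearest = sorted(houses, key=lambda h: euclidean(h[0], query))[:k]
    return sum(price for _, price in nearest) / k

print(knn_value((1600, 3)))  # 560.0 -- mean of the 3 closest prices
```

Everything interesting hides in two choices made here: the distance function and the value of K, which the next sections examine in turn.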
Mr. K’s Nearest Neighborhood

We all implicitly know what a neighborhood is. Whether you live in the woods or a row of brownstones, we all live in a neighborhood of sorts. A neighborhood for lack of a better definition could just be called a cluster of houses (we’ll get to clustering later).

A cluster at this point could be just thought of as a tight grouping of houses or items in n dimensions. But what denotes a “tight grouping”? Since you’ve most likely taken a geometry class at some time in your life, you’re probably thinking of the Pythagorean theorem or something similar, but things aren’t quite that simple. Distances are a class of functions that can be much more complex.
Distances

As the crow flies.
—Old Saying
Geometry class taught us that if you sum the squares of the two legs of a right triangle and take the square root, you’ll have the length of the third side, the hypotenuse (Figure 3-1). This as we all know is the Pythagorean theorem, but distances can be much more complicated. Distances can take many different forms, but generally there are geometrical, computational, and statistical distances, which we’ll discuss in this section.
Figure 3-1. Pythagorean theorem
Triangle Inequality

One interesting aspect of the triangle in Figure 3-1 is that the length of the hypotenuse is always less than the sum of the lengths of the other two sides (Figure 3-2).
Figure 3-2. Triangle broken into three line segments
Stated mathematically: ∥x + y∥ ≤ ∥x∥ + ∥y∥. This inequality is important for finding a distance function; if the triangle inequality didn’t hold, distances would become distorted as you measure distance between points in a Euclidean space.
Geometrical Distance The most intuitive distance functions are geometrical. Intuitively we can measure how far something is from one point to another. We already know about the Pythagorean theorem, but there are an infinite number of possibilities that satisfy the triangle inequality. Stated mathematically, we can take the Pythagorean theorem and build what is called the Euclidean distance, which is denoted as:

d(x, y) = √( Σ_{i=0}^{n} (x_i − y_i)² )
As you can see, this is similar to the Pythagorean theorem, except it includes a sum. Mathematics gives us even greater ability to build distances by using something called a Minkowski distance (see Figure 3-3):

d_p(x, y) = ( Σ_{i=0}^{n} |x_i − y_i|^p )^{1/p}

This p can be any real number greater than or equal to 1 and still satisfy the triangle inequality.
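As a quick sketch (mine, not from the original text), the Euclidean and Manhattan distances both fall out of a single Minkowski function in plain Python:

```python
def minkowski_distance(x, y, p=2):
    """Minkowski distance between two equal-length vectors.

    p=1 gives the Manhattan (taxicab) distance, p=2 the Euclidean distance.
    Only p >= 1 satisfies the triangle inequality.
    """
    if len(x) != len(y):
        raise ValueError("vectors must be the same length")
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

# The classic 3-4-5 right triangle: Euclidean distance is 5, Manhattan is 7.
print(minkowski_distance([0, 0], [3, 4]))        # 5.0
print(minkowski_distance([0, 0], [3, 4], p=1))   # 7.0
```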
Figure 3-3. Minkowski distances as p increases (Source: Wikimedia)
26
|
Chapter 3: K-Nearest Neighbors
Cosine similarity One last geometrical distance is called cosine similarity or cosine distance. The beauty of this distance is its sheer speed at calculating distances between sparse vectors. For instance, if we had 1,000 attributes collected about houses and 300 of these were mutually exclusive (meaning that one house had them but the others didn’t), then we would only need to include 700 dimensions in the calculation. Visually this measures the angle between two vectors in the inner product space and presents us with the cosine as a measure. Its function is:

d(x, y) = (x · y) / (∥x∥ ∥y∥)

where ∥x∥ denotes the Euclidean norm (the length of the vector x). Geometrical distances are generally what we want. When we talk about houses we want a geometrical distance. But there are other spaces that are just as valuable: computational, or discrete, as well as statistical distances.
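A minimal sketch of the formula in plain Python (my illustration, not the book’s code); note that the cosine itself is a similarity, so a common distance form subtracts it from 1:

```python
import math

def cosine_similarity(x, y):
    """Cosine of the angle between vectors x and y: (x . y) / (||x|| ||y||)."""
    dot = sum(a * b for a, b in zip(x, y))
    norm_x = math.sqrt(sum(a * a for a in x))
    norm_y = math.sqrt(sum(b * b for b in y))
    return dot / (norm_x * norm_y)

def cosine_distance(x, y):
    """Turn the similarity into a distance-like quantity."""
    return 1.0 - cosine_similarity(x, y)

print(cosine_similarity([1, 0], [1, 0]))  # 1.0 (same direction)
print(cosine_similarity([1, 0], [0, 1]))  # 0.0 (orthogonal)
```

For sparse vectors, only the dimensions where either vector is nonzero contribute to the sums, which is where the speed comes from.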
Computational Distances Imagine you want to measure how far it is from one part of the city to another. One way of doing this would be to utilize coordinates (latitude, longitude) and calculate a Euclidean distance. Let’s say you’re at Saint Edward State Park in Kenmore, WA (47.7329290, -122.2571466) and you want to meet someone at Vivace Espresso on Capitol Hill, Seattle, WA (47.6216650, -122.3213002). Using the Euclidean distance we would calculate:

√( (47.73 − 47.62)² + (−122.26 + 122.32)² ) ≈ 0.13
This is obviously a small result as it’s in degrees of latitude and longitude. To convert this into miles we would multiply it by 69.055, which yields approximately 8.9 miles (14.32 kilometers). Unfortunately this is way off! The actual distance is 14.2 miles (22.9 kilometers). Why are things so far off? Note that 69.055 is actually an approximation of latitude degrees to miles. Earth is an ellipsoid and therefore calculating distances actually depends on where you are in the world. But for such a short distance it’s good enough.
If I had the ability to lift off from Saint Edward State Park and fly to Vivace then, yes, it’d be shorter, but if I were to walk or drive I’d have to drive around Lake Washington (see Figure 3-4). This gets us to the motivation behind computational distances. If you were to drive from Saint Edward State Park to Vivace then you’d have to follow the constraints of a road.
Figure 3-4. Driving to Vivace from Saint Edward State Park
Manhattan distance This gets us into what is called the taxicab distance or Manhattan distance.

Equation 3-2. Manhattan distance
Σ_{i=0}^{n} |x_i − y_i|
Note that there is no ability to travel out of bounds. So imagine that your metric space is a grid of graphing paper and you are only allowed to draw along the boxes. The Manhattan distance can be used for problems such as traversal of a graph and discrete optimization problems where you are constrained by edges. With our housing example, most likely you would want to measure the value of houses that are close by driving, not by flying. Otherwise you might include houses in your search that are across a barrier like a lake or a mountain!
Levenshtein distance Another distance that is commonly used in natural language processing is the Levenshtein distance. An analogy for how Levenshtein distance works is changing one neighborhood to make an exact copy of another; the number of steps to make that happen is the distance. Usually this is applied to strings of characters to determine how many deletions, additions, or substitutions the strings require to be equal. This can be quite useful for determining how similar neighborhoods are, as well as strings. The formula for this is a bit more complicated, as it is a recursive function, so instead of looking at the math we’ll just write the Python for it:

def lev(a, b):
    if not a: return len(b)
    if not b: return len(a)
    return min(lev(a[1:], b[1:]) + (a[0] != b[0]),  # substitution
               lev(a[1:], b) + 1,                   # deletion
               lev(a, b[1:]) + 1)                   # addition
This is an extremely slow algorithm and I’m only putting it here for understanding, not for actual use. If you’d like to implement Levenshtein, you will need to use dynamic programming to get good performance.
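For completeness, here is one way the dynamic-programming version might look (my sketch, not the book’s code); it runs in O(len(a) × len(b)) time instead of exponential time:

```python
def lev_dp(a, b):
    """Levenshtein distance using a rolling one-row dynamic-programming table."""
    prev = list(range(len(b) + 1))  # distances from "" to each prefix of b
    for i, ca in enumerate(a, start=1):
        curr = [i]  # distance from a[:i] to ""
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion from a
                curr[j - 1] + 1,           # addition to a
                prev[j - 1] + (ca != cb),  # substitution (free if chars match)
            ))
        prev = curr
    return prev[-1]

print(lev_dp("kitten", "sitting"))  # 3
```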
Statistical Distances Last, there’s a third class of distances that I call statistical distances. In statistics we’re taught that to measure volatility or variance, we take pairs of datapoints and measure the squared difference. This gives us an idea of how dispersed the population is. This can actually be used when calculating distance as well, using what is called the Mahalanobis distance.
Imagine for a minute that you want to measure distance in an affluent neighborhood that is right on the water. People love living on the water, and the closer you are to it, the higher the home value. But with the distances discussed earlier, whether computational or geometrical, we would have a bad approximation of this particular neighborhood, because those distance calculations are primarily spherical in nature (Figures 3-5 and 3-6).
Figure 3-5. Driving from point A to point B on a city block
Figure 3-6. Straight line between A and B This seems like a bad approach for this neighborhood because it is not spherical in nature. If we were to use Euclidean distances we’d be measuring values of houses not on the beach. If we were to use Manhattan distances we’d only look at houses close by the road.
Mahalanobis distance Another approach is using the Mahalanobis distance. This takes into consideration some other statistical factors:
d(x, y) = √( Σ_{i=1}^{n} (x_i − y_i)² / s_i² )
What this effectively does is give more stretch to the grouping of items (Figure 3-7):
Figure 3-7. Mahalanobis distance
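A sketch of this simplified (diagonal) form in plain Python; this is my illustration with made-up variances, not the book’s code, and it assumes the dimensions are independent so only the per-dimension variances s_i² are needed:

```python
def mahalanobis_diagonal(x, y, variances):
    """Simplified Mahalanobis distance: each squared difference is divided
    by that dimension's variance before summing, which "stretches" the
    space along high-variance dimensions."""
    return sum((a - b) ** 2 / s2
               for a, b, s2 in zip(x, y, variances)) ** 0.5

# With unit variances this reduces to the plain Euclidean distance.
print(mahalanobis_diagonal([0, 0], [3, 4], [1.0, 1.0]))  # 5.0
# A high-variance dimension contributes less to the distance.
print(mahalanobis_diagonal([0, 0], [3, 4], [1.0, 16.0]))
```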
Jaccard distance Yet another distance metric is called the Jaccard distance. This takes into consideration the overlap between two populations. For instance, if the attributes of one house match another’s, then they would be overlapping and therefore close in distance, whereas if the houses had diverging attributes they wouldn’t match. This is primarily used to quickly determine how similar texts are, by counting up the frequencies of letters in a string and then counting the characters that are not the same across both. Its formula is:

J(X, Y) = |X ∩ Y| / |X ∪ Y|
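A minimal sketch in Python (mine, not the book’s): the formula above is a similarity, and subtracting it from 1 gives the Jaccard distance:

```python
def jaccard_similarity(x, y):
    """|X intersect Y| / |X union Y| for two collections treated as sets."""
    x, y = set(x), set(y)
    if not x and not y:
        return 1.0  # convention: two empty sets are identical
    return len(x & y) / len(x | y)

def jaccard_distance(x, y):
    return 1.0 - jaccard_similarity(x, y)

# Two strings sharing 2 of 4 distinct characters: similarity 0.5.
print(jaccard_similarity("abc", "abd"))  # 0.5
```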
This finishes up a primer on distances. Now that we know how to measure what is close and what is far, how do we go about building a grouping or neighborhood? How many houses should be in the neighborhood?
Curse of Dimensionality Before we continue, there’s a serious concern with using distances for anything, and that is called the curse of dimensionality. When we model high-dimensional spaces, our approximations of distance become less reliable. In practice it is important to realize that finding the right features of a data set is essential to making a resilient model. We will talk about feature engineering in Chapter 10, but for now be cognizant of the problem. Figure 3-8 shows a visual way of thinking about this.
Figure 3-8. Curse of dimensionality

As Figure 3-8 shows, when we put random dots on a unit sphere and measure the distance from the origin (0,0,0), we find that the distance is always 1. But if we were to project those points onto a 2D space, the distance would be less than or equal to 1. This same truth holds when we expand the dimensions: for instance, if we expanded our set from 3 dimensions to 4, it would be greater than or equal to 1. This inability to center in on a consistent distance is what breaks distance-based models, because all of the data points become chaotic and move away from one another.
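A small simulation (my illustration, not from the text) makes the effect concrete: average pairwise distances between random points grow dramatically as dimensions increase, crowding every point away from every other:

```python
import math
import random

def average_pairwise_distance(dims, n_points=100, seed=42):
    """Average Euclidean distance between random points in the unit hypercube."""
    rng = random.Random(seed)
    points = [[rng.random() for _ in range(dims)] for _ in range(n_points)]
    total, pairs = 0.0, 0
    for i in range(n_points):
        for j in range(i + 1, n_points):
            d = math.sqrt(sum((a - b) ** 2
                              for a, b in zip(points[i], points[j])))
            total += d
            pairs += 1
    return total / pairs

low = average_pairwise_distance(2)
high = average_pairwise_distance(100)
print(low, high)  # distances spread out sharply in high dimensions
```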
How Do We Pick K? Picking the number of houses to put into this model is a difficult problem—easy to verify but hard to calculate beforehand. At this point we know how we want to group things, but just don’t know how many items to put into our neighborhood. There are a few approaches to determining an optimal K, each with their own set of downsides: • Guessing • Using a heuristic • Optimizing using an algorithm
Guessing K Guessing is always a reasonable place to start. Many times when we are approaching a problem, we have domain knowledge of it. Whether we are an expert or not, we know enough about the problem to know what a neighborhood is. My neighborhood where I live, for instance, is roughly 12 houses. If I wanted to expand, I could set my K to 30 for a more flattened-out approximation.
Heuristics for Picking K There are three heuristics that can help you determine an optimal K for a KNN algorithm: 1. Use coprime class and K combinations 2. Choose a K that is greater than or equal to the number of classes plus one 3. Choose a K that is low enough to avoid noise
Use coprime class and K combinations Picking coprime numbers of classes and K will ensure fewer ties. Coprime numbers are two numbers that don’t share any common divisors except for 1. So, for instance, 4 and 9 are coprime while 3 and 9 are not. Imagine you have two classes, good and bad. If we were to pick a K of 6, which is even, then we might end up having ties. Graphically it looks like Figure 3-9.
Figure 3-9. Tie with K=6 and two classes If you picked a K of 5 instead (Figure 3-10), there wouldn’t be a tie.
Figure 3-10. K=5 with two classes and no tie
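The coprime heuristic above is easy to check in code; a quick sketch (mine, not the book’s) using Python’s gcd:

```python
from math import gcd

def is_coprime(n_classes, k):
    """True when K and the class count share no divisor except 1,
    which reduces the chance of tied votes."""
    return gcd(n_classes, k) == 1

# Two classes: K=6 can tie (as in Figure 3-9), K=5 cannot (Figure 3-10).
print(is_coprime(2, 6))  # False
print(is_coprime(2, 5))  # True
```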
Choose a K that is greater than or equal to the number of classes plus one Imagine there are three classes: lawful, chaotic, and neutral. A good heuristic is to pick a K of at least 3, because anything less will mean that there is no chance that each class could be represented. To illustrate, Figure 3-11 shows the case of K=2.
Figure 3-11. With K=2 there is no possibility that all three classes will be represented Note how there are only two classes that get the chance to be used. Again, this is why we need to use at least K=3. But based on what we found in the first heuristic, ties are not a good thing. So, really, instead of K=3, we should use K=4 (as shown in Figure 3-12).
Figure 3-12. With K set greater than the number of classes, there is a chance for all classes to be represented
Choose a K that is low enough to avoid noise As K increases, you eventually approach the size of the entire data set. If you were to pick the entire data set, you would select the most common class. A simple example is mapping a customer’s affinity to a brand. Say you have 100 orders as shown in Table 3-2.
Table 3-2. Brand to count

Brand                Count
Widget Inc.          30
Bozo Group           23
Robots and Rockets   12
Ion 5                35
Total                100
If we were to set K=100, our answer will always be Ion 5, because Ion 5 is the most common class in the order history. That is not really what we want; instead, we want to determine the most recent order affinity. More specifically, we want to minimize the amount of noise that comes into our classification. Without coming up with a specific algorithm for this, we can justify setting K to a much lower value, like K=3 or K=11.
Algorithms for picking K Picking K can be somewhat qualitative and nonscientific, and that’s why there are many algorithms showing how to optimize K over a given training set. There are many approaches to choosing K, ranging from genetic algorithms to brute force to grid searches. Many people assert that you should determine K based on the domain knowledge that you have as the implementor. For instance, if you know that 5 is good enough, you can pick that. This problem of trying to minimize error based on an arbitrary K is known as a hill climbing problem. The idea is to iterate through a couple of possible Ks until you find a suitable error. The difficult part about finding a K using an approach like genetic algorithms or brute force is that as K increases, the complexity of the classification also increases and slows down performance. In other words, as you increase K, the program actually gets slower. If you want to learn more about genetic algorithms applied to finding an optimal K, you can read more about it in Nigsch et al.’s Journal of Chemical Information and Modeling article, “Melting Point Prediction Employing k-Nearest Neighbor Algorithms and Genetic Parameter Optimization.” Personally, I think iterating twice through 1% of the population size is good enough. You should have a decent idea of what works and what doesn’t just by experimenting with different Ks.
Valuing Houses in Seattle Valuing houses in Seattle is a tough gamble. According to Zillow, their Zestimate is consistently off in Seattle. Regardless, how would we go about building something that tells us how valuable the houses are in Seattle? This section will walk through a simple example so that you can figure out with reasonable accuracy what a house is worth based on freely available data from the King County Assessor.
If you’d like to follow along in the code examples, check out the GitHub repo.
About the Data While the data is freely available, it wasn’t easy to put together. I did a bit of cajoling to get the data well formed. There are a lot of features, ranging from inadequate parking to whether the house has a view of Mount Rainier or not. I felt that while that was an interesting exercise, it’s not really important to discuss here. In addition to the data they gave us, geolocation has been added to all of the datapoints so we can come up with a location distance much more easily.
General Strategy Our general strategy for finding the values of houses in Seattle is to come up with something we’re trying to minimize or maximize so we know how good the model is. Since we will be looking at house values explicitly, we can’t calculate an “accuracy” rate, because every value will be different. So instead we will utilize a different metric called mean absolute error. With all models, our goal is to minimize or maximize something, and in this case we’re going to minimize the mean absolute error. This is defined as the average of the absolute errors. The reason we’ll use absolute error over other common metrics (like mean squared error) is that it’s interpretable: when it comes to house values it’s hard to get intuition around the average squared error, but by using absolute error we can instead say that our model is off by $70,000 or similar on average. As for unit testing and functional testing, we will approach this in a random fashion by stratifying the data into multiple chunks so that we can sample the mean absolute errors. This is mainly so that we don’t find just one weird case where the mean absolute error was exceptionally low. We will not be talking extensively here about unit testing, because this is an early chapter and I feel that it’s more important to focus on the overall testing of the model through mean absolute error.
Coding and Testing Design The basic design of the code for this chapter centers around a Regressor class. This class will take in King County housing data that comes in via a flat file, calculate an error rate, and do the regression (Figure 3-13). We will not be doing any unit testing in this chapter but instead will visually test the code using the plot_error_rates function we will build inside of the Regressor class.
Figure 3-13. Overall coding design

For this chapter we will determine success by looking at the nuances of how our regressor works as we increase folds.
KNN Regressor Construction To construct our KNN regression we will utilize something called a KDTree. It’s not essential that you know how these work, but the idea is that the KDTree will store data in an easily queryable fashion based on distance. The distance metric we will use is the Euclidean distance, since it’s easy to compute and will suit us just fine. You could try many other metrics to see whether the error rate is better or worse.
A Note on Packages You’ll note that we’re using quite a few packages. Python has excellent tools available for anything data science related, such as NumPy, Pandas, scikit-learn, SciPy, and others. Pandas and NumPy work together to build what is at its core a multidimensional array, but one that operates similarly to an SQL database in that you can query it. Pandas is the query interface and NumPy is the numerical processing underneath. You will also find other useful tools inside of the NumPy library. scikit-learn is a collection of machine learning tools for common algorithms (which we will be talking about in this book). SciPy is a scientific computing library that allows us to do things like use a KDTree. As we progress in the book we will rely heavily on these libraries. from pandas import Series, DataFrame import pandas as pd import numpy as np import numpy.random as npr import random from scipy.spatial import KDTree
from sklearn.metrics import mean_absolute_error
import sys

sys.setrecursionlimit(10000)

class Regression:
    def __init__(self, csv_file=None, data=None, values=None):
        if data is None and csv_file is not None:
            df = pd.read_csv(csv_file)
            self.values = df['AppraisedValue']
            df = df.drop('AppraisedValue', 1)
            df = (df - df.mean()) / (df.max() - df.min())
            self.df = df
            self.df = self.df[['lat', 'long', 'SqFtLot']]
        elif data is not None and values is not None:
            self.df = data
            self.values = values
        else:
            raise ValueError("Must have either csv_file or data set")

        self.n = len(self.df)
        self.kdtree = KDTree(self.df)
        self.metric = np.mean
        self.k = 5
Do note that we had to set the recursion limit higher, since KDTree will recurse and throw an error otherwise. There are a few things we’re doing here that I thought we should discuss. One of them is the idea of normalizing data. This is a great trick to make all of the data similar. Otherwise, what will happen is that we find something close that really shouldn’t be, or the bigger-numbered dimensions will skew results. On top of that, we’re only selecting latitude, longitude, and SqFtLot, because this is a proof of concept.

class Regression:
    # __init__

    def regress(self, query_point):
        distances, indexes = self.kdtree.query(query_point, self.k)
        m = self.metric(self.values.iloc[indexes])
        if np.isnan(m):
            raise Exception('Unexpected NaN result')
        else:
            return m
Here we are querying the KDTree to find the closest K houses. We then use the metric, in this case mean, to calculate a regression value. At this point we need to focus on the fact that, although all of this is great, we need some sort of test to make sure our data is working properly.
KNN Testing Up until this point we’ve written a perfectly reasonable KNN regression tool to tell us house prices in King County. But how well does it actually perform? To find out we use something called cross-validation, which involves the following generalized algorithm: • Take the data set and split it into two categories: testing and training • Use the training data to train the model • Use the testing data to test how well the model performs. We can do that with the following code:

class Regression:
    # __init__
    # regress

    def error_rate(self, folds):
        holdout = 1 / float(folds)
        errors = []
        for fold in range(folds):
            y_hat, y_true = self.__validation_data(holdout)
            errors.append(mean_absolute_error(y_true, y_hat))
        return errors

    def __validation_data(self, holdout):
        test_rows = random.sample(list(self.df.index),
                                  int(round(len(self.df) * holdout)))
        train_rows = set(range(len(self.df))) - set(test_rows)
        df_test = self.df.loc[test_rows]
        df_train = self.df.drop(test_rows)
        test_values = self.values.loc[test_rows]
        train_values = self.values.loc[train_rows]
        kd = Regression(data=df_train, values=train_values)

        y_hat = []
        y_actual = []
        for idx, row in df_test.iterrows():
            y_hat.append(kd.regress(row))
            y_actual.append(self.values[idx])
        return (y_hat, y_actual)
Folds are generally how many times you wish to split the data. So for instance if we had 3 folds we would hold 2/3 of the data for training and 1/3 for testing and iterate through the problem set 3 times (Figure 3-14).
Figure 3-14. Split data into training and testing Now these datapoints are interesting, but how well does our model perform? To do that, let’s take a visual approach and write code that utilizes Pandas’ graphics and mat‐ plotlib. class Regression: # __init__ # regress # error_rate # __validation_data def plot_error_rates(self): folds = range(2, 11) errors = pd.DataFrame({'max': 0, 'min': 0}, index=folds) for f in folds: error_rates = r.error_rate(f) errors['max'][f] = max(error_rates) errors['min'][f] = min(error_rates) errors.plot(title='Mean Absolute Error of KNN over different folds') plt.show()
Running this yields the graph in Figure 3-15.
Figure 3-15. The error rates we achieved. The x-axis is the number of folds, and the y-axis is the absolute error in estimated home price (i.e., how much it’s off by).
As you can see, starting with 2 folds we have a fairly tight absolute deviation of about $77,000. As we increase the folds and, as a result, reduce the testing sample, that widens to a range of $73,000 to $77,000. For a very simplistic model that contains everything from waterfront property to condos, this actually does quite well!
Conclusion While K-Nearest Neighbors is a simple algorithm, it yields quite good results. We have seen that for distance-based problems we can utilize KNN to great effect. We also learned that you can use this algorithm for either a classification or regression problem. We then analyzed the regression we built using a graphic representing the error. Next, we showed that KNN has a downside that is inherent in any distance-based metric: the curse of dimensionality. This curse is something we can overcome using feature transformations or selections. Overall it’s a great algorithm and it stands the test of time.
CHAPTER 4
Naive Bayesian Classification
Remember how email was several years ago? You probably recall your inbox being full of spam messages, ranging from Nigerian princes wanting to pawn off money to pharmaceutical advertisements. It became such a major issue that we spent most of our time filtering spam. Nowadays we spend a lot less time filtering spam than we used to, thanks to Gmail and tools like SpamAssassin. Using a method called a Naive Bayesian Classifier, such tools have been able to mitigate the influx of spam to our inboxes. This chapter will explore that topic as well as: • Bayes’ theorem • What a Naive Bayesian Classifier is and why it’s called “naive” • How to build a spam filter using a Naive Bayesian Classifier As noted in Table 2-2, a Naive Bayesian Classifier is a supervised and probabilistic learning method. It does well with data in which the inputs are independent from one another. It also prefers problems where the probability of any attribute is greater than zero.
Using Bayes’ Theorem to Find Fraudulent Orders Imagine you’re running an online store and lately you’ve been overrun with fraudulent orders. You estimate that about 10% of all orders coming in are fraudulent. In other words, in 10% of orders, people are stealing from you. Now of course you want to mitigate this by reducing the fraudulent orders, but you are facing a conundrum. Every month you receive at least 1,000 orders, and if you were to check every single one, you’d spend more money fighting fraud than the fraud was costing you in the
first place. Assuming that it takes up to 60 seconds per order to determine whether it’s fraudulent or not, and a customer service representative costs around $15 per hour to hire, that totals 200 hours and $3,000 per year. Another way of approaching this problem would be to review only the orders whose probability of being fraudulent is over 50%. In this case, we’d expect the number of orders we’d have to look at to be much lower. But this is where things become difficult, because the only thing we can determine is the overall probability of fraud, which is 10%. Given only that piece of information, we’d be back at square one looking at all orders, because it’s more probable that any given order is not fraudulent! Let’s say that we notice that fraudulent orders often use gift cards and multiple promotional codes. Using this knowledge, how would we determine what is fraudulent or not—namely, how would we calculate the probability of fraud given that the purchaser used a gift card? To answer that, we first have to talk about conditional probabilities.
Conditional Probabilities Most people understand what we mean by the probability of something happening. For instance, the probability of an order being fraudulent is 10%. That’s pretty straightforward. But what about the probability of an order being fraudulent given that it used a gift card? To handle that more complicated case, we need something called a conditional probability, which is defined as follows:

Equation 4-1. Conditional probability
P(A | B) = P(A ∩ B) / P(B)
Probability Symbols Generally speaking, writing P(E) means that you are looking at the probability of a given event. This event can be a lot of different things, including the event that A and B happened, the probability that A or B happened, or the probability of A given B happening in the past. Here we’ll cover how you’d notate each of these scenarios. A ∩ B is called the intersection function but could also be thought of as the Boolean operation AND. For instance, in Python it looks like this:

a = [1,2,3]
b = [1,4,5]
set(a) & set(b) #=> {1}
A ∪ B could be called the OR function, as it is A or B (or both). For instance, in Python it looks like the following:

a = [1,2,3]
b = [1,4,5]
set(a) | set(b) #=> {1,2,3,4,5}
Finally, the probability of A given B looks as follows in Python:

a = set([1,2,3])
b = set([1,4,5])
total = 6.0
p_a_cap_b = len(a & b) / total
p_b = len(b) / total
p_a_given_b = p_a_cap_b / p_b #=> 0.33
This definition basically says that the probability of A happening given that B happened is the probability of A and B happening divided by the probability of B. Graphically, it looks something like Figure 4-1.
Figure 4-1. How conditional probabilities are made

This shows how P(A | B) sits between P(A ∩ B) and P(B). In our fraud example, let’s say we want to measure the probability of fraud given that an order used a gift card. This would be:

P(Fraud | Giftcard) = P(Fraud ∩ Giftcard) / P(Giftcard)
Now this works if you know the actual probability of Fraud and Giftcard. At this point, we are up against the problem that we cannot calculate P(Fraud | Giftcard) directly, because that is hard to separate out. To solve this problem, we need to use a trick introduced by Bayes.
Inverse Conditional Probability (aka Bayes’ Theorem) In the 1700s, Reverend Thomas Bayes came up with the original research that would become Bayes’ theorem. Pierre-Simon Laplace extended Bayes’ research to produce the beautiful result we know today. Bayes’ theorem is as follows:

Equation 4-2. Bayes’ theorem
P(B | A) = P(A | B) P(B) / P(A)

This is because of the following:

Equation 4-3. Bayes’ theorem expanded
P(B | A) = ( P(A ∩ B) / P(B) ) · P(B) / P(A) = P(A ∩ B) / P(A)
This is useful in our fraud example because we can effectively back out our result using other information. Using Bayes’ theorem, we would now calculate:

P(Fraud | Giftcard) = P(Giftcard | Fraud) P(Fraud) / P(Giftcard)

Remember that the probability of fraud was 10%. Let’s say that the probability of gift card use is 10%, and based on our research the probability of gift card use in a fraudulent order is 60%. So what is the probability that an order is fraudulent given that it uses a gift card?

P(Fraud | Giftcard) = (60% · 10%) / 10% = 60%
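This back-of-the-envelope calculation is easy to verify in code; a tiny sketch (my illustration, not the book’s) using the numbers from the text:

```python
def bayes(p_b_given_a, p_a, p_b):
    """Bayes' theorem: P(A | B) = P(B | A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# P(Fraud | Giftcard) with P(Giftcard | Fraud)=60%, P(Fraud)=10%, P(Giftcard)=10%.
print(bayes(0.60, 0.10, 0.10))  # approximately 0.6
```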
The beauty of this is that your work on measuring fraudulent orders is drastically reduced, because all you have to look for is the orders with gift cards. Because the total number of orders is 1,000, and 100 of those are fraudulent, we expect 60 of those fraudulent orders to use gift cards. Since 10% of all orders use gift cards, that is 100 gift card orders in total, meaning only 40 of them are legitimate. At this point, you’ll notice we reduced the orders needing fraud review from 1,000 to 100 (i.e., 10% of the total). But can we do better? What about introducing something like people using multiple promo codes or other information?
Naive Bayesian Classifier We’ve already solved the problem of finding fraudulent orders given that a gift card was used, but what about the problem of fraudulent orders given that they have gift cards, or multiple promo codes, or other features? How would we go about that? Namely, we want to solve the problem of P(A | B, C) = ?. For this, we need a bit more information and something called the chain rule.
The Chain Rule If you think back to probability class, you might recall that the probability of A and B happening is the probability of B given A times the probability of A. Mathematically, this looks like P(A ∩ B) = P(B | A) P(A). This is assuming these events are not mutually exclusive. Using something called a joint probability, this smaller result transforms into the chain rule. Joint probabilities are the probability that all the events will happen. We denote this by using ∩. The generic case of the chain rule is:

Equation 4-4. Chain rule
P(A_1, A_2, ⋯, A_n) = P(A_1) P(A_2 | A_1) P(A_3 | A_1, A_2) ⋯ P(A_n | A_1, A_2, ⋯, A_{n−1})
This expanded version is useful in trying to solve our problem by feeding lots of information into our Bayesian probability estimates. But there is one problem: this can quickly evolve into a complex calculation using information we don’t have, so we make one big assumption and act naive.
Naiveté in Bayesian Reasoning The chain rule is useful for solving potentially inclusive problems, but we don’t have the ability to calculate all of those probabilities. For instance, if we were to introduce multiple promos into our fraud example, then we’d have the following to calculate:

P(Fraud | Giftcard, Promos) = P(Giftcard, Promos | Fraud) P(Fraud) / P(Giftcard, Promos)

Let’s ignore the denominator for now, as it doesn’t depend on whether the order is fraudulent or not. At this point, we need to focus on finding the calculation for P(Giftcard, Promos | Fraud) P(Fraud). If we apply the chain rule, this is equivalent to P(Fraud, Giftcard, Promos).
You can see this by the following (note that Fraud, Giftcard, and Promo have been abbreviated for space):

P(F, G, P) = P(F) P(G, P | F)
P(F) P(G, P | F) = P(F) P(G | F) P(P | F, G)
Now at this point we have a conundrum: how do you measure the probability of a promo code given fraud and gift cards? While this is the correct probability, it really can be difficult to measure—especially with more features coming in. What if we were to be a tad naive and assume that we can get away with independence, saying that we don’t care about the interaction between promo codes and gift cards, just the interaction of each independently with fraud? In that case, our math would be much simpler:

P(Fraud, Giftcard, Promo) = P(Fraud) P(Giftcard | Fraud) P(Promo | Fraud)
This would be proportional to our numerator. And, to simplify things even more, we can assert that we’ll normalize later with some magical Z, which is the sum of all the probabilities of the classes. So now our model becomes:

P(Fraud | Giftcard, Promo) = (1/Z) P(Fraud) P(Giftcard | Fraud) P(Promo | Fraud)
To turn this into a classification problem, we simply determine which class—fraud or not fraud—yields the highest probability. See Table 4-1.

Table 4-1. Probability of gift cards versus promos

                       Fraud   Not fraud
Gift card present      60%     30%
Multiple promos used   50%     30%
Probability of class   10%     90%
At this point, you can use this information to determine whether an order is fraudulent based purely on whether it has a gift card present and whether it used multiple promos. The probability that an order is fraudulent given the use of gift cards and multiple promos is 62.5%. While we can’t exactly figure out how much savings this gives you in terms of the number of orders you must review, we know that we’re using better information and making a better judgment.
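The calculation above can be sketched in a few lines (an illustration of my own, written for Python 3, not part of the book's code). It multiplies a class prior by each feature likelihood from Table 4-1 and then normalizes by Z. Since the normalized number depends on the priors you assume, the sketch only checks the structural properties: the scores sum to 1 and fraud dominates.

```python
# Per-class feature likelihoods and class priors, taken from Table 4-1.
likelihoods = {
    'fraud':     {'giftcard': 0.60, 'promos': 0.50},
    'not_fraud': {'giftcard': 0.30, 'promos': 0.30},
}
priors = {'fraud': 0.90, 'not_fraud': 0.10}

def naive_scores(features):
    """Multiply the prior by each feature likelihood, then divide by Z
    (the sum over all classes) so the scores form a distribution."""
    raw = {}
    for cls in priors:
        score = priors[cls]
        for feature in features:
            score *= likelihoods[cls][feature]
        raw[cls] = score
    z = sum(raw.values())  # the normalizing constant Z
    return {cls: score / z for cls, score in raw.items()}

scores = naive_scores(['giftcard', 'promos'])
```

Swapping in different priors changes the normalized values, but the winning class is determined by the same comparison the text describes.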
Chapter 4: Naive Bayesian Classification
There is one problem, though: what happens when the probability of using multiple promos given a fraudulent order is zero? A zero result can happen for several reasons, including that there just isn’t enough of a sample size. The way we solve this is by using something called a pseudocount.
Pseudocount

There is one big challenge with a Naive Bayesian Classifier, and that is the introduction of new information. For instance, let’s say we have a bunch of emails that are classified as spam or ham. We build our probabilities using all of this data, but then something bad happens: a new spammy word, fuzzbolt. Nowhere in our data did we see the word fuzzbolt, and so when we calculate the probability of spam given the word fuzzbolt, we get a probability of zero. This can have a zeroing-out effect that will greatly skew results toward the data we have.

Because a Naive Bayesian Classifier relies on multiplying all of the independent probabilities together to come up with a classification, if any of those probabilities are zero then our probability will be zero. Take, for instance, the email subject “Fuzzbolt: Prince of Nigeria.” Assuming we strip off the word of, we have the data shown in Table 4-2.

Table 4-2. Probability of word given ham or spam

  Word      Spam  Ham
  Fuzzbolt  0     0
  Prince    75%   15%
  Nigeria   85%   10%
Now let’s assume we want to calculate a score for ham or spam. In both cases, the score would end up being zero because fuzzbolt isn’t present. At that point, because we have a tie, we’d just go with the more common situation, which is ham. This means that we have failed and classified something incorrectly due to one word not being recognized.

There is an easy fix for that: pseudocount. When we go about calculating the probability, we add one to the count of the word. So, in other words, everything will end up being word_count + 1. This helps mitigate the zeroing-out effect for now. In the case of our fraud detector, we would add one to each count to ensure that it is never zero. So in our preceding example, let’s say we have 3,000 words. We would give fuzzbolt a score of 1/3000. The other scores would change slightly, but this avoids the zeroing-out problem.
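The add-one idea can be sketched directly (my own Python 3 illustration; the word counts are hypothetical). A fuller Laplace smoothing would also grow the denominator by the vocabulary size, but the add-one numerator is enough to show the effect described above.

```python
# Hypothetical word counts for the spam class; "fuzzbolt" was never seen.
spam_counts = {'prince': 75, 'nigeria': 85}
total_spam_words = 3000

def pseudocount_probability(word, counts, total_words):
    """Add one to the raw count, as described above, so a word we have
    never seen still gets a small nonzero probability."""
    return (counts.get(word, 0) + 1) / total_words

# The unseen word now scores 1/3000 instead of zeroing out the product.
unseen = pseudocount_probability('fuzzbolt', spam_counts, total_spam_words)
```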
Spam Filter

The canonical machine learning example is building a spam filter. In this section, we will work up a simple spam filter, SpamTrainer, using a Naive Bayesian Classifier and improve it by utilizing a 3-gram tokenization model. As you have learned before, Naive Bayesian Classifiers can be easily calculated, and operate well under strongly independent conditions. In this example, we will cover the following:

• What the classes look like interacting with each other
• A good data source
• A tokenization model
• An objective to minimize our error
• A way to improve over time
Setup Notes

Python is constantly changing and I have tried to keep the examples working under both 2.7.x and 3.0.x Python. That being said, things might change as Python changes. For more comprehensive information check out the GitHub repo.
Coding and Testing Design

In our example, each email has an object that takes an .eml type text file that then tokenizes it into something the SpamTrainer can utilize for incoming email messages. See Figure 4-2 for the class diagram.
Figure 4-2. Class diagram showing how emails get turned into a SpamTrainer
When it comes to testing, we will focus on the tradeoff between false positives and false negatives. With spam detection it becomes important to realize that a false positive (classifying an email as spam when it isn’t) could actually be very bad for business. We will focus on minimizing the false positive rate, but similar results could be applied to minimizing false negatives or having them equal each other.
Data Source

There are numerous sources of data that we can use, but the best is raw email messages marked as either spam or ham. For our purposes, we can use the CSDMC2010 SPAM corpus. This data set has 4,327 total messages, of which 2,949 are ham and 1,378 are spam. For our proof of concept, this should work well enough.
Email Class

The Email class has one responsibility, which is to parse an incoming email message according to the RFC for emails. To handle this, we use Python’s standard library email package because there’s a lot of nuance in there. In our model, all we’re concerned with is subject and body. The cases we need to handle are HTML messages, plaintext, and multipart. Everything else we’ll just ignore.

Building this class using test-driven development, let’s go through this step by step. Starting with the simple plaintext case, we’ll copy one of the example training files from our data set under data/TRAINING/TRAIN_00001.eml to ./tests/fixtures/plain.eml. This is a plaintext email and will work for our purposes. Note that the split between a message and header in an email is usually denoted by “\r\n\r\n”. Along with that header information is generally something like “Subject: A Subject goes here.” Using that, we can easily extract our test case, which is:

import unittest
import io
import re

from naive_bayes.email_object import EmailObject

class TestPlaintextEmailObject(unittest.TestCase):
    CLRF = "\n\n"

    def setUp(self):
        self.plain_file = './tests/fixtures/plain.eml'
        self.plaintext = io.open(self.plain_file, 'r')
        self.text = self.plaintext.read()
        self.plaintext.seek(0)
        self.plain_email = EmailObject(self.plaintext)

    def test_parse_plain_body(self):
        body = self.CLRF.join(self.text.split(self.CLRF)[1:])
        self.assertEqual(self.plain_email.body(), body)

    def test_parses_the_subject(self):
        subject = re.search("Subject: (.*)", self.text).group(1)
        self.assertEqual(self.plain_email.subject(), subject)
Unit Testing in Python

Up until this point we haven’t introduced the unittest package in Python. Its main objective is to define unit tests for us to run on our code. Like similar unit testing frameworks in other languages such as Ruby, we build a class that is prefixed with “Test” and then implement specific methods.

Methods to implement:

• Any method that is prefixed with test_ will be treated as a test to be run.
• setUp(self) is a special method that gets run before each test. Think of this like a block of code that gets run ahead of every test in the class.

Table 4-3 lists the assertions we can use.

Table 4-3. Python unittest has many assertions we can use

  Method                      Checks
  assertEqual(a, b)           a == b
  assertNotEqual(a, b)        a != b
  assertTrue(x)               bool(x) is True
  assertFalse(x)              bool(x) is False
  assertIs(a, b)              a is b
  assertIsNot(a, b)           a is not b
  assertIsNone(x)             x is None
  assertIsNotNone(x)          x is not None
  assertIn(a, b)              a in b
  assertNotIn(a, b)           a not in b
  assertIsInstance(a, b)      isinstance(a, b)
  assertNotIsInstance(a, b)   not isinstance(a, b)
Do note that we will not use all of these methods; they are listed here for future reference.
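As a minimal, runnable illustration of that lifecycle (my own example, not from the book's test suite):

```python
import unittest

class TestLifecycle(unittest.TestCase):
    def setUp(self):
        # Runs before every test_* method below.
        self.values = [1, 2, 3]

    def test_membership(self):
        self.assertIn(2, self.values)

    def test_length(self):
        self.assertEqual(len(self.values), 3)
```

Saved in a module, this would be run with python -m unittest; each test method gets a fresh self.values from setUp.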
Now instead of relying purely on regular expressions, we want to utilize a library. We’ll use Python’s standard library email module, which will handle all of the nitty-gritty details. Making email work for this particular case, we have:
import email
from BeautifulSoup import BeautifulSoup

class EmailObject:
    def __init__(self, filepath, category = None):
        self.filepath = filepath
        self.category = category
        self.mail = email.message_from_file(self.filepath)

    def subject(self):
        return self.mail.get('Subject')

    def body(self):
        return self.mail.get_payload(decode=True)
BeautifulSoup is a library that parses HTML and XML.
Now that we have captured the case of plaintext, we need to solve the case of HTML. For that, we want to capture only the inner_text. But first we need a test case, which looks something like this:

import unittest
import io
import re

from BeautifulSoup import BeautifulSoup
from naive_bayes.email_object import EmailObject

class TestHTMLEmail(unittest.TestCase):
    def setUp(self):
        self.html_file = io.open('./tests/fixtures/html.eml', 'rb')
        self.html = self.html_file.read()
        self.html_file.seek(0)
        self.html_email = EmailObject(self.html_file)

    def test_parses_stores_inner_text_html(self):
        body = "\n\n".join(self.html.split("\n\n")[1:])
        expected = BeautifulSoup(body).text
        self.assertEqual(self.html_email.body(), expected)

    def test_stores_subject(self):
        subject = re.search("Subject: (.*)", self.html).group(1)
        self.assertEqual(self.html_email.subject(), subject)
As mentioned, we’re using BeautifulSoup to calculate the inner_text, and we’ll have to use it inside of the Email class as well. Now the problem is that we also need to detect the content_type. So we’ll add that in:

import email
from BeautifulSoup import BeautifulSoup

class EmailObject:
    def __init__(self, filepath, category = None):
        self.filepath = filepath
        self.category = category
        self.mail = email.message_from_file(self.filepath)

    def subject(self):
        return self.mail.get('Subject')

    def body(self):
        content_type = self.mail.get_content_type()
        body = self.mail.get_payload(decode=True)

        if content_type == 'text/html':
            return BeautifulSoup(body).text
        elif content_type == 'text/plain':
            return body
        else:
            return ''
At this point, we could add multipart processing as well, but I will leave that as an exercise that you can try out yourself. In the coding repository mentioned earlier in the chapter, you can see the multipart version. Now we have a working email parser, but we still have to deal with tokenization, or what to extract from the body and subject.
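For reference, that multipart exercise can be sketched with the standard library alone. This is my own illustration, not the book's repository version; it walks every MIME part and keeps the first plaintext payload, and it builds its own small fixture so it can run standalone:

```python
import email
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def multipart_body(mail):
    """Return the first text/plain payload from a (possibly multipart)
    message; HTML stripping is omitted for brevity."""
    if not mail.is_multipart():
        return mail.get_payload(decode=True)
    for part in mail.walk():
        if part.get_content_type() == 'text/plain':
            return part.get_payload(decode=True)
    return b''

# Build a small multipart fixture to exercise the function.
msg = MIMEMultipart('alternative')
msg['Subject'] = 'Hello'
msg.attach(MIMEText('plain body', 'plain'))
msg.attach(MIMEText('<p>html body</p>', 'html'))

parsed = email.message_from_string(msg.as_string())
body = multipart_body(parsed)  # the decoded text/plain payload
```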
Tokenization and Context

As Figure 4-3 shows, there are numerous ways to tokenize text, such as by stems, word frequencies, and words. In the case of spam, we are up against a tough problem because things are more contextual. The phrase Buy now sounds spammy, whereas Buy and now do not. Because we are building a Naive Bayesian Classifier, we are assuming that each individual token is contributing to the spamminess of the email.
Figure 4-3. Lots of ways to tokenize text

The goal of the tokenizer we’ll build is to extract words into a stream. Instead of returning an array, we want to yield the token as it happens so that we are keeping a low memory profile. Our tokenizer should also downcase all strings to keep them similar:

import unittest
from naive_bayes.tokenizer import Tokenizer

class TestTokenizer(unittest.TestCase):
    def setUp(self):
        self.string = "this is a test of the emergency broadcasting system"

    def test_downcasing(self):
        expectation = ["this", "is", "all", "caps"]
        actual = Tokenizer.tokenize("THIS IS ALL CAPS")
        self.assertEqual(actual, expectation)

    def test_ngrams(self):
        expectation = [
            [u'\u0000', "quick"],
            ["quick", "brown"],
            ["brown", "fox"],
        ]
        actual = Tokenizer.ngram("quick brown fox", 2)
        self.assertEqual(actual, expectation)
As promised, we do two things in this tokenizer code. First, we lowercase all words. Second, instead of returning an array all at once, we want to be able to yield tokens as a stream. This is to mitigate memory constraints, as there is no need to build an array and return it. This makes it lazier. To make the subsequent tests work, though, we will have to fill in the skeleton for our tokenizer module like so:

import re

class Tokenizer:
    NULL = u'\u0000'

    @staticmethod
    def tokenize(string):
        return re.findall("\w+", string.lower())

    @staticmethod
    def ngram(string, ngram):
        tokens = Tokenizer.tokenize(string)
        ngrams = []

        for i in range(len(tokens)):
            shift = i - ngram + 1
            padding = max(-shift, 0)
            first_idx = max(shift, 0)
            last_idx = first_idx + ngram - padding
            ngrams.append(Tokenizer.pad(tokens[first_idx:last_idx], padding))

        return ngrams

    @staticmethod
    def pad(tokens, padding):
        padded_tokens = []

        for i in range(padding):
            padded_tokens.append(Tokenizer.NULL)

        return padded_tokens + tokens
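Note that the tokenize listed above still builds a full list. A generator variant that actually streams tokens, as the surrounding text describes, could look like this (my own Python 3 sketch, not the book's code):

```python
import re

def lazy_tokenize(string):
    """Yield lowercased word tokens one at a time instead of building
    the whole list up front, keeping the memory profile low."""
    for match in re.finditer(r"\w+", string.lower()):
        yield match.group(0)

# Consumers can pull tokens on demand, or materialize them when needed.
tokens = lazy_tokenize("THIS IS ALL CAPS")
```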
Now that we have a way of parsing and tokenizing emails, we can move on to build the Bayesian portion: the SpamTrainer.
SpamTrainer

The SpamTrainer will accomplish three things:

• Storing training data
• Building a Bayesian classifier
• Error minimization through cross-validation
Storing training data

The first step we need to tackle is to store training data from a given set of email messages. In a production environment, you would pick something that has persistence. In our case, we will go with storing everything in one big dictionary.

A set is a unique collection of data.
Remember that most machine learning algorithms have two steps: training and then computation. Our training step will consist of these substeps:

• Storing a set of all categories
• Storing unique word counts for each category
• Storing the totals for each category

So first we need to capture all of the category names; that test would look something like this:

import unittest
import io
import sets

from naive_bayes.email_object import EmailObject
from naive_bayes.spam_trainer import SpamTrainer

class TestSpamTrainer(unittest.TestCase):
    def setUp(self):
        self.training = [['spam', './tests/fixtures/plain.eml'],
                         ['ham', './tests/fixtures/small.eml'],
                         ['scram', './tests/fixtures/plain.eml']]
        self.trainer = SpamTrainer(self.training)
        file = io.open('./tests/fixtures/plain.eml', 'r')
        self.email = EmailObject(file)

    def test_multiple_categories(self):
        categories = self.trainer.categories
        expected = sets.Set([k for k, v in self.training])
        self.assertEqual(categories, expected)
The solution is in the following code:

from sets import Set
from collections import defaultdict
import io

from tokenizer import Tokenizer
from email_object import EmailObject

class SpamTrainer:
    def __init__(self, training_files):
        self.categories = Set()

        for category, file in training_files:
            self.categories.add(category)

        self.totals = defaultdict(float)
        self.training = {c: defaultdict(float) for c in self.categories}
        self.to_train = training_files

    def total_for(self, category):
        return self.totals[category]
You’ll notice we’re just using a set to capture this for now, as it’ll hold on to the unique version of what we need. Our next step is to capture the unique tokens for each email. We are using the special category called _all to capture the count for everything:

class TestSpamTrainer(unittest.TestCase):
    # setUp
    # test_multiple_categories

    def test_counts_all_at_zero(self):
        for cat in ['_all', 'spam', 'ham', 'scram']:
            self.assertEqual(self.trainer.total_for(cat), 0)
To get this to work, we have introduced a new method called train(), which will take the training data, iterate over it, and save it into an internal dictionary. The following is a solution:

class SpamTrainer:
    # __init__
    # total_for

    def train(self):
        for category, file in self.to_train:
            email = EmailObject(io.open(file, 'rb'))

            self.categories.add(category)

            for token in Tokenizer.unique_tokenizer(email.body()):
                self.training[category][token] += 1
                self.totals['_all'] += 1
                self.totals[category] += 1

        self.to_train = {}
Now we have taken care of the training aspect of our program but really have no clue how well it performs. And it doesn’t classify anything. For that, we still need to build our classifier.
Building the Bayesian classifier

To refresh your memory, Bayes’ theorem is:

P(Ai | B) = P(B | Ai) P(Ai) / Σj P(B | Aj) P(Aj)

But because we’re being naive about this, we’ve distilled it into something much simpler:

Equation 4-5. Bayesian spam score

Score(Spam, W1, W2, ..., Wn) = P(Spam) P(W1 | Spam) P(W2 | Spam) ... P(Wn | Spam)
which is then divided by some normalizing constant, Z.

Our goal now is to build the methods score, normalized_score, and classify. The score method will just be the raw score from the preceding calculation, while normalized_score will fit the range from 0 to 1 (we get this by dividing by the total sum, Z).

The score method’s test is as follows:

class TestSpamTrainer(unittest.TestCase):
    # setUp
    # test_multiple_categories
    # test_counts_all_at_zero

    def test_probability_being_1_over_n(self):
        trainer = self.trainer
        scores = trainer.score(self.email).values()

        self.assertAlmostEqual(scores[0], scores[-1])

        for i in range(len(scores) - 1):
            self.assertAlmostEqual(scores[i], scores[i + 1])

Because the training data is uniform across the categories, there is no reason for the score to differ across them. To make this work in our SpamTrainer object, we will have to fill in the pieces like so:

class SpamTrainer:
    # __init__
    # total_for
    # train

    def score(self, email):
        self.train()

        cat_totals = self.totals

        aggregates = {c: cat_totals[c] / cat_totals['_all'] for c in self.categories}

        for token in Tokenizer.unique_tokenizer(email.body()):
            for cat in self.categories:
                value = self.training[cat][token]
                r = (value + 1) / (cat_totals[cat] + 1)
                aggregates[cat] *= r

        return aggregates
This method does the following:

• First, it trains the model if it’s not already trained (the train method handles this).
• For each token of the blob of an email we iterate through all categories and calculate the probability of that token being within that category. This calculates the Naive Bayesian score of each without dividing by Z.

Now that we have score figured out, we need to build a normalized_score that adds up to 1. Testing for this, we have:

class TestSpamTrainer(unittest.TestCase):
    # setUp
    # test_multiple_categories
    # test_counts_all_at_zero
    # test_probability_being_1_over_n

    def test_adds_up_to_one(self):
        trainer = self.trainer
        scores = trainer.normalized_score(self.email).values()
        self.assertAlmostEqual(sum(scores), 1)
        self.assertAlmostEqual(scores[0], 1 / 2.0)

And subsequently on the SpamTrainer class we have:

class SpamTrainer:
    # __init__
    # total_for
    # train
    # score

    def normalized_score(self, email):
        score = self.score(email)
        scoresum = sum(score.values())

        normalized = {cat: (agg / scoresum) for cat, agg in score.iteritems()}
        return normalized
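One practical caveat with scoring this way: multiplying many small probabilities can underflow floating point to zero. A common remedy (not used in the book's listing) is to sum logarithms instead, which preserves the class ranking because log is monotonic. A Python 3 sketch of my own:

```python
import math

def log_score(prior, likelihoods):
    """Sum of logs of the prior and each likelihood; monotonic in the
    product, so comparing classes by log_score gives the same winner."""
    return math.log(prior) + sum(math.log(p) for p in likelihoods)

# A product of 300 tiny likelihoods underflows to exactly 0.0 ...
product = 1.0
for _ in range(300):
    product *= 1e-20

# ... while the log-space version stays a finite, comparable number.
log_version = log_score(1.0, [1e-20] * 300)
```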
Calculating a classification

Because we now have a score, we need to calculate a classification for the end user to use. This classification should take the form of an object that returns guess and score. There is an issue of tie breaking here.

Let’s say, for instance, we have a model that has turkey and tofu. What happens when the scores come back evenly split? Probably the best course of action is to go with whichever is more popular, whether it be turkey or tofu. What about the case where the probability is the same? In that case, we can just go with alphabetical order.

When testing for this, we need to introduce a preference order—that is, the occurrence of each category. A test for this would be:

class TestSpamTrainer(unittest.TestCase):
    # setUp
    # test_multiple_categories
    # test_counts_all_at_zero
    # test_probability_being_1_over_n
    # test_adds_up_to_one

    def test_preference_category(self):
        trainer = self.trainer
        expected = sorted(trainer.categories, key=lambda cat: trainer.total_for(cat))
        self.assertEqual(trainer.preference(), expected)

Getting this to work is trivial and would look like this:

class SpamTrainer:
    # __init__
    # total_for
    # train
    # score
    # normalized_score

    def preference(self):
        return sorted(self.categories, key=lambda cat: self.total_for(cat))

Now that we have preference set up, we can test for our classification being correct. The code to do that is as follows:

class TestSpamTrainer(unittest.TestCase):
    # setUp
    # test_multiple_categories
    # test_counts_all_at_zero
    # test_probability_being_1_over_n
    # test_adds_up_to_one
    # test_preference_category

    def test_give_preference_to_whatever_has_the_most(self):
        trainer = self.trainer
        score = trainer.score(self.email)

        preference = trainer.preference()[-1]
        preference_score = score[preference]

        expected = SpamTrainer.Classification(preference, preference_score)
        self.assertEqual(trainer.classify(self.email), expected)

Getting this to work in code again is simple:

class SpamTrainer:
    # __init__
    # total_for
    # train
    # score
    # normalized_score
    # preference

    class Classification:
        def __init__(self, guess, score):
            self.guess = guess
            self.score = score

        def __eq__(self, other):
            return self.guess == other.guess and self.score == other.score

    def classify(self, email):
        score = self.score(email)

        max_score = 0.0
        preference = self.preference()
        max_key = preference[-1]

        for k, v in score.iteritems():
            if v > max_score:
                max_key = k
                max_score = v
            elif v == max_score and preference.index(k) > preference.index(max_key):
                max_key = k
                max_score = v

        return self.Classification(max_key, max_score)
Error Minimization Through Cross-Validation

At this point, we need to measure how well our model works. To do so, we need to take the data that we downloaded earlier and do a cross-validation test on it. From there, we need to measure only false positives, and then based on that determine whether we need to fine-tune our model more.
Minimizing false positives

Up until this point, our goal with making models has been to minimize error. This error could be easily denoted as the count of misclassifications divided by the total classifications. In most cases, this is exactly what we want, but in a spam filter this isn’t what we’re optimizing for. Instead, we want to minimize false positives. False positives, also known as Type I errors, are when the model incorrectly predicts a positive when it should have been negative.

In our case, if our model predicts spam when in fact the email isn’t, then the user will lose her emails. We want our spam filter to have as few false positives as possible. On the other hand, if our model incorrectly predicts something as ham when it isn’t, we don’t care as much.

Instead of minimizing the total misclassifications divided by total classifications, we want to minimize spam misclassifications divided by total classifications. We will also measure false negatives, but they are less important because we are trying to reduce spam that enters someone’s mailbox, not eliminate it.

To accomplish this, we first need to take some information from our data set, which we’ll cover next.
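The distinction between the two error types can be made concrete with a small helper (my own Python 3 illustration; the guesses and labels are made up):

```python
def rates(predictions, actuals):
    """Count false positives (predicted spam, actually ham) and false
    negatives (predicted ham, actually spam) over all classifications,
    and return each as a fraction of the total."""
    fp = fn = correct = 0
    for guess, truth in zip(predictions, actuals):
        if guess == 'spam' and truth == 'ham':
            fp += 1
        elif guess == 'ham' and truth == 'spam':
            fn += 1
        else:
            correct += 1
    total = fp + fn + correct
    return fp / total, fn / total

guesses = ['spam', 'ham', 'ham', 'spam']
truths = ['ham', 'ham', 'spam', 'spam']
fp_rate, fn_rate = rates(guesses, truths)  # one of each over four emails
```

Minimizing false positives means driving fp_rate down even if fn_rate stays a bit higher.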
Building the two folds

Inside the spam email training data is a file called keyfile.label. It contains information about whether the file is spam or ham. Using that, we can build a cross-validation script. First let’s start with setup, which involves importing the packages we’ve worked on and some IO and regular expression libraries:

from spam_trainer import SpamTrainer
from email_object import EmailObject
import io
import re

print "Cross Validation"

correct = 0
false_positives = 0.0
false_negatives = 0.0
confidence = 0.0

This doesn’t do much yet except start with a zeroed counter for correct, false positives, false negatives, and confidence. To set up the test we need to load the label data and turn that into a SpamTrainer object. We can do that using the following:

def label_to_training_data(fold_file):
    training_data = []

    for line in io.open(fold_file, 'rb'):
        label_file = line.rstrip().split(' ')
        training_data.append(label_file)

    print training_data
    return SpamTrainer(training_data)
trainer = label_to_training_data('./tests/fixtures/fold1.label')
This instantiates a trainer object by calling the label_to_training_data function. Next we parse the emails we have in fold number 2:

def parse_emails(keyfile):
    emails = []
    print "Parsing emails for " + keyfile

    for line in io.open(keyfile, 'rb'):
        label, file = line.rstrip().split(' ')
        emails.append(EmailObject(io.open(file, 'rb'), category=label))

    print "Done parsing files for " + keyfile
    return emails

emails = parse_emails('./tests/fixtures/fold2.label')
Now we have a trainer object and emails parsed. All we need to do now is calculate the accuracy and validation metrics:

def validate(trainer, set_of_emails):
    correct = 0
    false_positives = 0.0
    false_negatives = 0.0
    confidence = 0.0

    for email in set_of_emails:
        classification = trainer.classify(email)
        confidence += classification.score

        if classification.guess == 'spam' and email.category == 'ham':
            false_positives += 1
        elif classification.guess == 'ham' and email.category == 'spam':
            false_negatives += 1
        else:
            correct += 1

    total = false_positives + false_negatives + correct

    false_positive_rate = false_positives / total
    false_negative_rate = false_negatives / total
    error = (false_positives + false_negatives) / total

    message = """
    False Positives: {0}
    False Negatives: {1}
    Error Rate: {2}
    """.format(false_positive_rate, false_negative_rate, error)
    print message

validate(trainer, emails)
Last, we can analyze the other direction of the cross-validation (i.e., validating fold1 against a fold2 trained model):

trainer = label_to_training_data('./tests/fixtures/fold2.label')
emails = parse_emails('./tests/fixtures/fold1.label')
validate(trainer, emails)
Cross-validation and error measuring

From here, we can actually build our cross-validation test, which will read fold1 and fold2 and then cross-validate to determine the actual error rate. The test looks something like this (see Table 4-4 for the results):

Cross Validation::Fold1 unigram model
  validates fold1 against fold2 with a unigram model
False Positives: 0.0036985668053629217
False Negatives: 0.16458622283865001
Error Rate: 0.16828478964401294

Cross Validation::Fold2 unigram model
  validates fold2 against fold1 with a unigram model
False Positives: 0.005545286506469501
False Negatives: 0.17375231053604437
Error Rate: 0.17929759704251386
Table 4-4. Spam versus ham

  Category  Email count  Word count  Probability of email  Probability of word
  Spam      1,378        231,472     31.8%                 36.3%
  Ham       2,949        406,984     68.2%                 63.7%
  Total     4,327        638,456     100%                  100%
As you can see, ham is more probable, so we will default to that and more often than not we’ll classify something as ham when it might not be. The good thing here, though, is that we have reduced spam by 80% without sacrificing incoming messages.
Conclusion

In this chapter, we have delved into building and understanding a Naive Bayesian Classifier. As you have learned, this algorithm is well suited for data that can be asserted to be independent. Being a probabilistic model, it works well for classifying data into multiple categories given the underlying score. This supervised learning method is useful for fraud detection, spam filtering, and any other problem that has these types of features.
CHAPTER 5
Decision Trees and Random Forests
Every day we make decisions. Every second we make decisions. Our perceptive brains receive roughly 12.5 gigabytes to 2.5 terabytes of information per second—an impressive amount—but they only focus on 60 bits of information per second.1 Humans are exceptionally adept at taking in lots of data and quickly finding patterns in it. But we’re not so great under pressure. In Chapter 1 we discussed flight and how checklists have solved many of its problems. We don’t use checklists because we’re stupid; on the contrary, it’s because under stress we forget small bits of information.

What if the effects of our decisions were even greater? Take, for instance, classifying mushrooms in the forest. If you are lucky enough to live in a climate that supports mushrooms such as morels, which are delicious, then you can see the allure of going to find your own, as they are quite expensive! But as we all know finding mushrooms in the forest is extremely dangerous if you misclassify them.

While death by mushroom is quite rare, the effects are well documented. Death caps, Amanita phalloides, cause liver failure. On the other hand, if you were to eat a Psilocybe semilanceata by accident, you would be in for a trip. Also known as liberty caps, these are the notorious magic mushrooms that people ingest to experience psychedelic effects!

While we as humans are good at processing information, we might not be the best at making decisions that are objective. When our decisions can produce outcomes as
1 “New Measure of Human Brain Processing Speed,” MIT Technology Review, August 25, 2009,-brain; James Randerson, “How Many Neurons Make a Human Brain? Billions Fewer Than We Thought,” The Guardian, February 28, 2012,; AI Impacts, “Neuron Firing Rates in Humans,” April 4, 2014,.
varied as a psychedelic high, death, or delicious food, we need to apply an algorithm. For that we’ll use decision trees.

This should never be used to find mushrooms in the woods. Don’t do it, ever, unless you are a trained mycologist or traveling with an experienced mycologist. You risk a terrible death if you do so. This is purely educational and should only be used to see the nuances between mushrooms as well as understand our fungus friends better.
In this chapter we will first talk about the nuances of mushrooms and how we will go about defining success with our model. We’ll first take a fairly folk-theorem approach to classifying mushrooms using some of the normal classification techniques to determine whether something is poisonous or not. Then we’ll move into splitting our data into pieces based on the attributes of mushrooms. We’ll also talk about the ill effects of a deep tree. Last, we’ll discuss finding trees using an ensemble method called random forests. Throughout this chapter we will talk about how to approach this problem with a testing focus.
The Nuances of Mushrooms

As with any machine learning problem, knowing your domain is important. This domain knowledge can make or break many models, and mushrooms are no different. These amazing fungi are hard to classify because similar-looking species can have vastly different effects when eaten. For instance, in the Boletus genus there are Boletus badius (bay bolete), Boletus pulcherrimus, and Boletus manicus (see Figure 5-1).
Figure 5-1. Boletus badius and Boletus pulcherrimus. Source: Wikimedia.
The bay bolete is edible and delicious, Boletus pulcherrimus is poisonous, and Boletus manicus is psychedelic. They all look fairly similar since they are big fat mushrooms that share a common top. If our goal was to find bolete-like mushrooms then we might find ourselves in extreme danger. So what should we do? People have been classifying mushrooms for a long time and this has led to folk traditions.
Classifying Mushrooms Using a Folk Theorem

Folk traditions have led to a few heuristics we could use to build a model to help us classify mushrooms:

• Poisonous mushrooms are brightly colored.
• Insects and animals will avoid contact with poisonous mushrooms.
• Poisonous mushrooms will turn rice red if boiled.
• Poisonous mushrooms have a pointed cap. Edible mushrooms have a flat cap.
• Poisonous mushrooms taste bad.
• Boletes are safe to eat.2

All of these are erroneous and unfortunately have killed people who followed some of them. But we might be able to increase the accuracy of this list by bringing them all into one unified theorem or folk theorem. Say, for instance, we first asked whether the mushroom was brightly colored or not, then asked whether insects or animals will avoid mushrooms, and so forth. We could keep asking questions until we find ourselves with a generalized answer. Visually we could represent this as a flow chart (Figure 5-2).

But this is a loose way of building a model and suffers from the fact that we don’t have data. As a matter of fact, I wouldn’t condone collecting information on whether the mushroom tasted bad—that just seems dangerous. So what can we do instead?
2 For more information on mushrooms see Ian Robert Hall (2003), Edible and Poisonous Mushrooms of the World (Timber Press), p. 103, ISBN 0-88192-586-1.
Figure 5-2. Folk theorem flow chart
Finding an Optimal Switch Point

Instead of focusing on domain knowledge, we could take a step back and focus on hard data. There is a data set that the University of California, Irvine owns about edible and poisonous mushrooms. Sorry, there's no information on psychedelic mushrooms in this data, but we can still use it to come up with a better approach than the folk theorem. The data contains quite a few attributes that might help us determine whether a mushroom is edible or not, such as cap shape, odor, and veil color.

Instead of relying on folktales about what makes a mushroom poisonous or not, what does the data say? We could, for instance, come up with a probability of each feature adding independently to the overall poisonousness of a mushroom. This would be a Naive Bayesian Classification, but it has the problem that each feature isn't independent. There is crossover. Maybe a mushroom that is bright and flat is okay to eat while a mushroom that is bright and round isn't.

Instead, to build this decision tree we can take the overall algorithm:

• Split data into subcategories using the most informational attribute.
• Keep going until a threshold is reached.

This algorithm is quite simple. The idea is to take the entire population of mushrooms and split them into subcategories until we have a tree showing us how to classify something (Figure 5-3).
Figure 5-3. Splitting data using categories

Three common metrics are used to split data into subcategories:

1. Information gain
2. GINI impurity
3. Variance reduction
Information Gain

Knowing that our goal is to split a population of mushrooms into subcategories, we want to split on attributes that improve our model. We want to take an attribute such as odor and determine how it affects the classification accuracy. This can be done using information gain. Conceptually this is a metric from information theory that tells us how well the attribute tracks with the overall goal. It can be calculated as:

Gain = H_new − H_previous = H(T) − H(T | a)

This tells us the relative information entropy gain in positive terms. So, for instance, if the previous entropy was −2 and the new entropy is −1, then we would have a gain of 1.
Information theory primer: entropy is used as a way of determining just how descriptive bits are. A canonical example of entropy would be that if it's always sunny in Death Valley with a probability of 100%, then the entropy would be 0 to send information about what the weather of the day was. The information doesn't need to be encoded since there's nothing to report.

Another example of high entropy would be having a complex password. The more numerous and diverse the characters you use, the higher the entropy. The same is true of attributes. If we have lots of possibilities for mushroom odor, then that would have higher entropy.
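As a concrete sketch of the two quantities above (my own illustration with made-up labels and odor values, not this chapter's code), entropy and information gain can be computed directly from label counts:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels, H(T)."""
    total = len(labels)
    return -sum((n / total) * log2(n / total) for n in Counter(labels).values())

def information_gain(labels, attribute_values):
    """Gain = H(T) - H(T | a): how much splitting on the attribute
    reduces entropy, weighted by the size of each split."""
    total = len(labels)
    groups = {}
    for label, value in zip(labels, attribute_values):
        groups.setdefault(value, []).append(label)
    conditional = sum(len(g) / total * entropy(g) for g in groups.values())
    return entropy(labels) - conditional

# Toy data: here odor perfectly separates edible from poisonous
labels = ['edible', 'edible', 'poisonous', 'poisonous']
odor = ['almond', 'almond', 'foul', 'foul']
print(information_gain(labels, odor))  # 1.0: the split removes all uncertainty
```

An attribute that tells us nothing (the same value for every row) would give a gain of 0, which is why the tree-building algorithm prefers high-gain attributes at each split.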
GINI Impurity

Not to be confused with the GINI coefficient, GINI impurity is a probabilistic measure. It defines how probable an attribute is at showing up and the probability of it being mistaken. The formula for impurity is:

I_G(f) = Σ_{i=1}^{m} p(f_i)(1 − p(f_i)) = 1 − Σ_{i=1}^{m} p(f_i)²
To understand this, examine a person's simple taste profile (Table 5-1).

Table 5-1. Taste profile

Like/Not Like  Sweet  Salty
Like           True   True
Like           True   False
Like           False  True
Not Like       False  False
Not Like       False  True
Measuring what the GINI impurity is for the factors "Sweet" and "Salty" would be calculated this way: sum the probability of that factor in a given class (Like/Not Like) over each factor (Sweet/Salty). For instance with "Sweet":

I_G(Sweet) = (2/3)(1 − 2/3) + (0/2)(1 − 2/2) = 2/9

Similarly:

I_G(Salty) = (2/3)(1 − 2/3) + (1/2)(1 − 1/2) = 2/9 + 1/4 = 17/36
What this means is that the GINI impurity for Salty is higher than the GINI impurity for Sweet. Intuitively while creating a decision tree we would want to choose Sweet as a split point first, since it will create less impurity in the tree.
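The numbers above can be reproduced with a short function. This is an illustrative sketch of the per-class calculation used in the text (the tuple layout of the table is my own):

```python
from fractions import Fraction

# Rows from Table 5-1: (class, Sweet, Salty)
table = [
    ('Like', True, True),
    ('Like', True, False),
    ('Like', False, True),
    ('Not Like', False, False),
    ('Not Like', False, True),
]

def impurity(rows, column):
    """Sum p * (1 - p) over each class, where p is the chance the
    attribute in `column` is True within that class."""
    total = Fraction(0)
    for klass in {r[0] for r in rows}:
        in_class = [r for r in rows if r[0] == klass]
        p = Fraction(sum(1 for r in in_class if r[column]), len(in_class))
        total += p * (1 - p)
    return total

print(impurity(table, 1))  # Sweet -> 2/9
print(impurity(table, 2))  # Salty -> 2/9 + 1/4 = 17/36
```

Using `Fraction` keeps the arithmetic exact, so the results match the hand calculation digit for digit.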
Variance Reduction

Variance reduction is used primarily in continuous decision trees. Conceptually, variance reduction aims to reduce the dispersion of the classification. While it doesn't apply to classification problems such as whether mushrooms are edible or not, it does apply to continuous outputs. If, for instance, we would rather have a model that predicts in a predictable fashion:

ξ = E[X_1j] − E[X_2j] = μ_1 − μ_2
Decision trees are wonderful but have a major drawback: sometimes they overfit data. As we will see in many chapters in this book, overfitting is a big problem. We want to model our problem without memorizing the data. To solve this we need to figure out a way of making our model general. We do that through pruning.
Pruning Trees

When building a model to understand the nuances of mushrooms, we don't want to just memorize everything. While I highly suggest memorizing all the data for perfect classifications if you do forage for mushrooms, in this case we need to focus on the model of the mushroom kingdom. To build a better model we can use pruning. Decision trees are NP in that they are generally hard to compute but easy to verify. This is even more amplified by the problem of complex trees. Our goal for pruning is to find a subtree of the full decision tree, using the preceding decision points, that minimizes this error surface:

(err(prune(T, t), S) − err(T, S)) / (|leaves(T)| − |leaves(prune(T, t))|)
Finding the optimal pruned tree is difficult because we need to run through all possibilities to find out which subtree is the best. To overcome this we need to rethink our original approach for training the decision tree in the first place. For that we'll utilize an ensemble method called random forests.
Ensemble Learning

We haven't discussed ensemble learning yet, but it's a highly valuable tool in any machine learning programmer's toolkit. Ensemble methods are like metaprogramming for machine learning: the basic idea is to build a model out of many submodels. With our mushroom classification model we have discovered that we can find a solution, but it takes a long time to run. We want to find the optimal subtree, which can also be thought of as a decision tree that satisfies our model. There are lots of ensemble methods, but for this chapter we will focus on bagging and random forests.
Bagging

One simple approach to ensemble learning is bagging, which is short for "bootstrap aggregation." This method was invented as a way to improve a model without changing anything except the training set. It does this by aggregating multiple random versions of the training set. Imagine that we randomly sample data in our data set. For instance, we take a subset that overlooks something like the poisonous Bolete mushroom. For each one of these subsamples we train a decision tree using our metrics like the GINI impurity or information gain (Figure 5-4). For each of these models we then have an accuracy, precision, and recall associated with it.

Accuracy, precision, and recall are all metrics to determine how viable the model we have built is. Many of us already know accuracy but haven't run into precision or recall yet. The definitions of these metrics are:

• Precision = True Positives / (True Positives + False Positives)
• Recall = True Positives / (True Positives + False Negatives)
• Accuracy = (True Positives + True Negatives) / (Number of all responses)

Precision is a measure of how on point the classification is. For instance, out of all the positive matches the model finds, how many of them were correct? Recall can be thought of as the sensitivity of the model. It is a measure of whether all the relevant instances were actually looked at. Accuracy as we know it is simply an error rate of the model. How well does it do in aggregate?
Figure 5-4. Many trees make up a random forest

We now have a set of decision trees that we can aggregate by finding the majority vote of a classification (Figure 5-5).
Figure 5-5. Voting usually is about picking the winner

This drastically improves performance because it reduces variability in the prediction but doesn't have a bunch of bias mixed in. What this implicitly means is that one decision tree will have a lot of noise contained within it, whereas decision trees in aggregate average out. This is similar to the central limit theorem.
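To see the mechanics, here is a toy sketch of bagging (my own illustration, not the book's code): bootstrap resamples of a tiny labeled data set each train a stand-in "stump" model, and predictions are aggregated by majority vote. A real implementation would train full decision trees on each resample.

```python
import random
from collections import Counter

def bootstrap(data, rng):
    """Draw len(data) rows with replacement."""
    return [rng.choice(data) for _ in data]

def train_stump(sample):
    """Toy stand-in for a decision tree: always predict the
    majority class of its bootstrap sample."""
    majority = Counter(label for _, label in sample).most_common(1)[0][0]
    return lambda x: majority

def bagged_predict(models, x):
    """Aggregate submodel predictions by majority vote."""
    votes = Counter(model(x) for model in models)
    return votes.most_common(1)[0][0]

rng = random.Random(0)
data = [(0, 'edible')] * 8 + [(1, 'poisonous')] * 2
models = [train_stump(bootstrap(data, rng)) for _ in range(25)]
print(bagged_predict(models, 0))  # edible: nearly every resample is majority-edible
```

Each individual stump is noisy (its bootstrap sample may over- or under-represent a class), but the vote over 25 of them is very stable, which is the averaging effect described above.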
Random forests

Another method of aggregating tree models together is to randomly select feature spaces. For instance, in our mushroom kingdom we can build decision trees using five subfeatures at each iteration. This would yield at most 26,334 combinations (22 choose 5). What is intriguing is that we might find more information about our mushroom kingdom this way, because some features might be more collaborative than others. Like bagging, we can aggregate the resulting trees by votes.

We will be discussing variable importance in a later chapter, but what is fascinating about decision trees is that you can determine which variables are important and use this to build features.
Writing a Mushroom Classifier

For access to all the code contained in this chapter, visit the GitHub repo.
Getting back to our example: what if we were to build something that classifies mushrooms into poisonous or edible based on the various features associated with them? There's an infinite number of algorithms we could choose from, but since we want something that's easy to understand, decision trees are a good candidate.
Coding and testing design

To build this mushroom classifier and regression we have to first build some classes. The basic idea is to feed in mushroom data that has attributes and whether it's edible. From here we define the following classes (see Figure 5-6):

MushroomProblem
    Implements validation_data for our use in validating the model

MushroomRegression
    An implementation of a regression tree

MushroomClassifier
    A utility class for classification problems

MushroomForest
    An implementation of random forests to classify mushrooms

MushroomTree
    An implementation of a decision tree to classify mushrooms
Figure 5-6. Class design for mushroom problem

Testing mushroom classification and regression will take two different forms: squared error (for regression) and confusion matrices (Figure 5-7). Confusion matrices are a way of determining how well classification problems work.
Figure 5-7. Confusion matrix example with yes or no answers
Confusion Matrix

Confusion matrices are a way of tabulating how well the classifier works. Given two categories, we want to test whether our classifier is right. Reading confusion matrices involves looking at actual and predicted pairs: matching pairs (down the diagonal) are correct classifications, and any actual/predicted row-column pairs that don't match are incorrect classifications.
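As a quick illustration of that tabulation (with made-up yes/no labels rather than the mushroom data), a confusion matrix is just a tally of (actual, predicted) pairs, with the correct classifications on the diagonal:

```python
from collections import Counter

actual    = ['yes', 'yes', 'no', 'no', 'yes', 'no']
predicted = ['yes', 'no', 'no', 'yes', 'yes', 'no']

# Tally each (actual, predicted) pair
matrix = Counter(zip(actual, predicted))

for a in ('yes', 'no'):
    print(a, [matrix[(a, p)] for p in ('yes', 'no')])
# yes [2, 1]
# no [1, 2]
```

Here four of the six calls sit on the diagonal (correct), and the two off-diagonal counts are the misclassifications in each direction.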
MushroomProblem

To write this classifier we first need to do some setup. For this we will rely on Pandas, NumPy, and scikit-learn. You'll notice that we use a lot of Pandas functions that help put the data into easy-to-use classes and features. Let's start by defining the problem. Given a datafile of mushroom training data with attributes attached to it, we want to load that into a class that will factorize those attributes into numerical information and output validation data for testing:

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
    from numpy.random import permutation
    from numpy import array_split, concatenate
    from sklearn.metrics import roc_curve, auc, mean_squared_error
    import pandas as pd
    import numpy as np

    class MushroomProblem:
        def __init__(self, data_file):
            self.dataFrame = pd.read_csv(data_file)
            for k in self.dataFrame.columns[1:]:
                self.dataFrame[k], _ = pd.factorize(self.dataFrame[k])

            sorted_cats = sorted(pd.Categorical(self.dataFrame['class']).categories)
            self.classes = np.array(sorted_cats)
            self.features = self.dataFrame.columns[self.dataFrame.columns != 'class']

        def __factorize(self, data):
            y, _ = pd.factorize(pd.Categorical(data['class']), sort=True)
            return y
This sets up the initial class, but we also need a function that splits the data into a variable number of folds for cross-validation:

    class MushroomProblem:
        # __init__

        def validation_data(self, folds):
            df = self.dataFrame
            response = []

            assert len(df) > folds

            perms = array_split(permutation(len(df)), folds)

            for i in range(folds):
                train_idxs = list(range(folds))
                train_idxs.pop(i)

                train = []
                for idx in train_idxs:
                    train.append(perms[idx])

                train = concatenate(train)

                test_idx = perms[i]

                training = df.iloc[train]
                test_data = df.iloc[test_idx]

                y = self.__factorize(training)
                classifier = self.train(training[self.features], y)
                predictions = classifier.predict(test_data[self.features])

                expected = self.__factorize(test_data)
                response.append([predictions, expected])

            return response
This first defines the problem we're trying to solve (i.e., the mushroom classification problem). From here we can take a few different approaches:

• A regression
• A classifier
  - A decision tree
  - A random forest

The major difference between them would be how the training method is set up. For instance, with a regression we'd just need:

    class MushroomRegression(MushroomProblem):
        def train(self, X, Y):
            reg = DecisionTreeRegressor()
            reg = reg.fit(X, Y)
            return reg

        def validate(self, folds):
            responses = []

            for y_true, y_pred in self.validation_data(folds):
                responses.append(mean_squared_error(y_true, y_pred))

            return responses
For our classifiers we can define them as such:

    class MushroomClassifier(MushroomProblem):
        def validate(self, folds):
            confusion_matrices = []

            for test, training in self.validation_data(folds):
                confusion_matrices.append(self.confusion_matrix(training, test))

            return confusion_matrices

        def confusion_matrix(self, train, test):
            return pd.crosstab(test, train, rownames=['actual'], colnames=['preds'])

    class MushroomForest(MushroomClassifier):
        def train(self, X, Y):
            clf = RandomForestClassifier(n_jobs=2)
            clf = clf.fit(X, Y)
            return clf

    class MushroomTree(MushroomClassifier):
        def train(self, X, Y):
            clf = DecisionTreeClassifier()
            clf = clf.fit(X, Y)
            return clf
While this is great, it doesn't really answer the question of how you would go about testing this. How well would this model really hold up?
Testing

The best testing method is to stratify our data into cross-validation folds and determine whether we are classifying properly; for the regression, we will output a mean squared error instead. To do this we need some simple code to check:

    from classifier import MushroomTree, MushroomForest, MushroomRegression

    data = './data/agaricus-lepiota.data'
    folds = 5

    print "Calculating score for decision tree"
    tree = MushroomTree(data)
    print tree.validate(folds)

    print "Calculating score for random forest method"
    forest = MushroomForest(data)
    print forest.validate(folds)

    print "Calculating score for regression tree"
    regression = MushroomRegression(data)
    print regression.validate(folds)
Running this code shows the following output. In the output, actual is the data inside of the training set (the data we hold to be true), and preds are the results we got out of the model we built:

    Calculating score for decision tree
    [preds      0    1
    actual
    0         844    0
    1           0  781, preds      0    1
    actual
    0         834    0
    1           0  791, preds      0    1
    actual
    0         814    0
    1           0  811, preds      0    1
    actual
    0         855    0
    1           0  770, preds      0    1
    actual
    0         861    0
    1           0  763]
    Calculating score for random forest method
    [preds      0    1
    actual
    0         841    0
    1           0  784, preds      0    1
    actual
    0         869    0
    1           0  756, preds      0    1
    actual
    0         834    0
    1           0  791, preds      0    1
    actual
    0         835    0
    1           0  790, preds      0    1
    actual
    0         829    0
    1           0  795]
    Calculating score for regression tree
    [0.0, 0.0, 0.0, 0.0, 0.0]
What you’ll notice is, given this toy example, we are able to create a decision tree that does exceptionally well. Does that mean we should go out to the woods and eat mushrooms? No, but given the training data and information we gathered, we have built a highly accurate model of mapping mushrooms to either poisonous or edible! The resulting decision tree is actually quite fascinating as you can see in Figure 5-8.
Figure 5-8. The resulting tree from building decision trees

I don't think it's important to discuss what this tree means, but it is interesting to think of mushroom poisonousness as a function of a handful of decision nodes.
Conclusion

In this chapter we learned how to classify data by using decision trees. This can be useful for making hierarchical classifications and when certain attributes determine split points well. We showed that decision trees and random forests are both well suited for classifying mushroom edibility. And remember: don't use this in the wild to classify mushrooms! Find a mycologist.
CHAPTER 6
Hidden Markov Models
Intuition informs much of what we do: for example, it tells us that certain words tend to be a certain part of speech, or that if a user visits a signup page, she has a higher probability of becoming a customer. But how would you build a model around intuition?

Hidden Markov models (HMMs) are well versed in finding a hidden state of a given system using observations and an assumption about how those states work. In this chapter, we will first talk about how to track user states given their actions, then explore more about what an HMM is, and finally build a part-of-speech tagger using the Brown Corpus. The part-of-speech tagger will tag words in sentences as nouns, pronouns, or any part of speech in the Brown Corpus.

HMMs can be either supervised or unsupervised, and they are called Markovian due to their reliance on a Markov model. They work well where there doesn't need to be a lot of historical information built into the model. They also work well for adding localized context to a classification. Unlike what we saw with Naive Bayesian Classification, which relied on a lot of history to determine whether a user is spammy or not, HMMs can be used to predict changes over time in a model.
Tracking User Behavior Using State Machines

Have you ever heard of the sales funnel? This is the idea that there are different levels of customer interaction. People will start as prospects and then transition into more engaged states (see Figure 6-1).
Figure 6-1. A generalized sales funnel

Prospects are "lurkers" who visit the site once or twice but usually don't engage. Users, on the other hand, like to browse and occasionally make purchases. Customers are quite engaged and have bought something but usually don't buy a lot in a short time, and thus go back to being users temporarily.

Let's say that we have an online store and determine that out of prospects that visit the site, 15% will sign up, and 5% will become customers right away. When the visitor is already a user, he will cancel his account 5% of the time and buy something 15% of the time. If the visitor is a customer, he will cancel his account only 2% of the time and go back to being a user 95% of the time instead of continually buying things. We could represent the information we have collected in a transition matrix, which shows the probability of going from one state to another, or remaining in the same state (Table 6-1).

Table 6-1. Transition probability

           Prospect  User  Customer
Prospect   0.80      0.15  0.05
User       0.05      0.80  0.15
Customer   0.02      0.95  0.03
What the transition probability defines is known as a state machine (see Figure 6-2). It also tells us a lot about how our current customers behave. We can determine the conversion rate, attrition rate, and other probabilities. The conversion rate is the probability of a prospect signing up, which would be 20%: the probability of going from prospect to user plus the probability of going from prospect to customer (15% + 5%). You could also determine the attrition rate by taking the average of 5% and 2%, which is 3.5%.
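The rates above fall straight out of the transition matrix in Table 6-1; here is a small sketch of that arithmetic (the nested-dict layout is my own):

```python
# Transition probabilities from Table 6-1, keyed [from_state][to_state]
T = {
    'Prospect': {'Prospect': 0.80, 'User': 0.15, 'Customer': 0.05},
    'User':     {'Prospect': 0.05, 'User': 0.80, 'Customer': 0.15},
    'Customer': {'Prospect': 0.02, 'User': 0.95, 'Customer': 0.03},
}

# Conversion rate: a prospect becomes a user or a customer
conversion = T['Prospect']['User'] + T['Prospect']['Customer']

# Attrition rate: the average of user and customer cancellation probabilities
attrition = (T['User']['Prospect'] + T['Customer']['Prospect']) / 2

# Chance of remaining a prospect four visits in a row
stuck = T['Prospect']['Prospect'] ** 4

print(round(conversion, 3), round(attrition, 3), round(stuck, 4))
# 0.2 0.035 0.4096
```

The last value shows the "operates over time" point: the probability of a visitor staying a prospect decays geometrically, which is why lingering anonymous visitors eventually either convert or leave.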
Figure 6-2. User state transition machine

This is an uncommon way of displaying user behavior in analytics, because it is too explanatory. But it has one advantage over traditional conversion rate calculations: the ability to look at how a user operates over time. For instance, we could determine the probability of a user being a prospect given that he was a prospect the last four times: the probability of staying a prospect (say 80%) multiplied out over those four previous observations. The probability that someone keeps viewing the site and never signs up is low, because eventually he might sign up.

But there is also one major problem with this model: there is no way for us to reliably determine these states without asking each user individually. The state is hidden from our observation. A user can view the site anonymously. That is actually fine, as you will soon see. As long as we are able to observe interaction with the site and make a judgment call about the underlying transitions from other sources (think Google Analytics), then we can still solve this problem. We do this by introducing another level of complexity called emissions.
Emissions/Observations of Underlying States

With our preceding example, we don't know when someone goes from being a prospect to a user to a customer. But we are able to observe what a user is doing and what her behavior is. We know that for a given observation there is a probability that she is in a given state. We can determine the user's underlying state by observing her emitted behaviors.

Let's say, for instance, that we have five pages on our website: Home, Signup, Product, Checkout, and Contact Us. Now, as you might imagine, some of these pages matter to us and others do not. For instance, Signup would most likely mean the prospect becomes a user, and Checkout means the user becomes a customer. This information gets more interesting because we know the probabilities of states. Let's say we know the emission and state probabilities shown in Table 6-2.
Table 6-2. Emission and state probabilities

Page name   Prospect  User  Customer
Home        0.4       0.3   0.3
Signup      0.1       0.8   0.1
Product     0.1       0.3   0.6
Checkout    0         0.1   0.9
Contact Us  0.7       0.1   0.2
We know the probability of users switching states as well as the probability of the behavior they are emitting given the underlying state. Given this info, what is the probability that a user who has viewed the Home, Signup, and Product pages becomes a customer? Namely, we want to solve the problem depicted in Figure 6-3.
Figure 6-3. States versus observations

To figure this out, we need to determine the probability that a user is in the customer state given all her previous states, or notationally, P(Customer | S1, S2), as well as the probability of the user viewing the product page given that she was a customer multiplied by the probability of signup given the state, or notationally, P(Product_Page | Customer) * P(Signup_Page | S2) * P(Homepage | S1). The problem here is that there are more unknowns than knowns.

This finite model is difficult to solve because it involves a lot of calculations. Calculating a problem like P(Customer | S1, S2, ⋯, SN) is complicated. To solve this, we need to introduce the Markov assumption.

Emissions and observations are used interchangeably in HMM nomenclature. They are the same thing and refer simply to what a process is emitting or what you can observe.
Simplification Through the Markov Assumption

Remember from the Naive Bayesian Classification that each attribute would independently add to the probability of some events. So for spam, the probability would be independently conditional on words or phrases like Prince and Buy now. In the model that we're building with user behavior, though, we do want dependence. Mainly, we want the previous state to be part of the next state's probability. In fact, we would assert that the previous states have a relationship to the user's current state.

In the case of Naive Bayesian Classification, we would make the assumption that the probability of something was independently conditional on other events. So spam was independently conditional on each word in the email. We can do the same with our current system. We can state that the probability of being in a particular state is primarily based on what happened in the previous state. So instead of P(Customer | S1, S2, ⋯, SN), our equation would be P(Customer | SN). But why can we get away with such a gross simplification?

Given a state machine like the one we have just defined, the system infers probabilistically and recursively where you have been in the past. For instance, if a site visitor were in the customer state, then you could say that the most probable previous state would be user, and that the most probable state before that would be prospect. This simplification also has one exciting conclusion, which leads us into our next topic: Markov chains.
Using Markov Chains Instead of a Finite State Machine

We have been talking purely about one system, and only one outcome, thus far. But what is powerful about the Markov assumption is that you can model a system as it operates forever. Instead of looking locally at what the process is going to do, we can figure out how the system will always behave. This brings us to the idea of a Markov chain.

Markov chains are exceptional at simulating systems. Queuing theory, finance, weather modeling, and game theory all make heavy use of Markov chains. They are powerful because they represent behaviors in a concise way. Unlike models such as neural networks, which can become extremely complex as we add nodes, HMMs rely on only a few probability matrices; they are extremely useful at modeling system behaviors.

Markov chains can analyze and find information within an underlying process that will operate forever. But that still doesn't solve our fundamental problem, which is that we still need to determine what state a given person is in given his hidden previous state and our own observations. For that, we will need to enhance Markov chains with a hidden aspect.
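As a sketch of "how the system will always behave," we can repeatedly apply the Table 6-1 transitions until the distribution over states stops changing. This long-run (stationary) calculation is my own illustration, not code from the book:

```python
states = ['Prospect', 'User', 'Customer']
T = {
    'Prospect': {'Prospect': 0.80, 'User': 0.15, 'Customer': 0.05},
    'User':     {'Prospect': 0.05, 'User': 0.80, 'Customer': 0.15},
    'Customer': {'Prospect': 0.02, 'User': 0.95, 'Customer': 0.03},
}

# Start with everyone as a prospect and iterate the chain
dist = {'Prospect': 1.0, 'User': 0.0, 'Customer': 0.0}
for _ in range(200):
    dist = {s: sum(dist[prev] * T[prev][s] for prev in states) for s in states}

print({s: round(p, 3) for s, p in dist.items()})
```

On these numbers the chain settles at roughly 19% prospects, 70% users, and 12% customers no matter where it starts, which is the kind of forever-running behavior the text is describing.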
Hidden Markov Model

We've talked a lot about observation and underlying state transitions, but now we're almost back to where we started. We still need to figure out what a user's state is. To do this, we will use a Hidden Markov Model, which comprises these three components:

Evaluation
    How likely is it that a sequence like Home → Signup → Product → Checkout will come from our transition and observation of users?

Decoding
    Given this sequence, what does the most likely underlying state sequence look like?

Learning
    Given an observed sequence, what will the user most likely do next?

In the following sections, we will discuss these three elements in detail. First, we'll talk about using the Forward-Backward algorithm to evaluate a sequence of observations. Then we will delve into how to solve the decoding problem with the Viterbi algorithm, which works on a conceptual level. Last, we'll touch on the idea of learning as an extension of decoding.
Evaluation: Forward-Backward Algorithm

Evaluation is a question of figuring out how probable a given sequence is. This is important in determining how likely it is that your model actually created the sequence that you are modeling. It can also be quite useful for determining, for example, if the sequence Home→Home is more probable than Home→Signup. We perform the evaluation step by using the Forward-Backward algorithm. This algorithm's goal is to figure out the probability of a hidden state subject to the observations. This is effectively asking: given some observations, what is the probability that they happened?
Mathematical Representation of the Forward-Backward Algorithm

The Forward-Backward algorithm computes the probability of an emission happening given its underlying states, that is, P(e_k | s). At first glance, this looks difficult because you would have to compute a lot of probabilities to solve it. If we used the chain rule, this could easily become expansive. Fortunately, we can use a simple trick to solve it instead.

The probability of e_k given an observation sequence is proportional to the joint distribution of e_k and the observations:

p(e_k ∣ s) ∝ p(e_k, s)
which we can actually split into two separate pieces using the probability chain rule:

p(s_{k+1}, s_{k+2}, ⋯, s_n ∣ e_k, s_1, s_2, ⋯, s_k) p(e_k, s_1, s_2, ⋯, s_k)
This looks fruitless, but we can actually forget about s_1, ⋯, s_k in the first probability because the probabilities are D-Separated. I won't discuss D-Separation too much, but because we're asserting the Markov assumption in our model we can effectively forget about these variables, because they precede what we care about in our probability model:
This is the Forward-Backward algorithm! Graphically, you can imagine this to be a path through this probability space (see Figure 6-4). Given a specific emission at, say, index 2, we could calculate the probabil‐ ity by looking at the forward and backward probabilities.
Figure 6-4. States versus emissions

The forward term is looking at the joint probability of the hidden state at point k given all the emissions up to that point. The backward term is looking at the conditional probability of emissions from k+1 to the end given that hidden point.
Using User Behavior

Using our preceding example of Home→Signup→Product→Checkout, let's calculate the probability of that sequence happening inside our model using the Forward-Backward algorithm. First let's set up the problem by building a class called ForwardBackward. (The transition and emission probabilities from Tables 6-1 and 6-2 are stored as nested dictionaries so they can be looked up by state and page name.)

    class ForwardBackward:
        def __init__(self):
            self.observations = ['homepage', 'signup', 'product page', 'checkout']
            self.states = ['Prospect', 'User', 'Customer']
            self.emissions = ['homepage', 'signup', 'product page', 'checkout',
                              'contact us']

            self.start_probability = {
                'Prospect': 0.8,
                'User': 0.15,
                'Customer': 0.05
            }

            # Transition probabilities from Table 6-1, keyed [from_state][to_state]
            self.transition_probability = {
                'Prospect': {'Prospect': 0.80, 'User': 0.15, 'Customer': 0.05},
                'User':     {'Prospect': 0.05, 'User': 0.80, 'Customer': 0.15},
                'Customer': {'Prospect': 0.02, 'User': 0.95, 'Customer': 0.03},
            }

            # Emission probabilities from Table 6-2, keyed [state][page]
            self.emission_probability = {
                'Prospect': {'homepage': 0.4, 'signup': 0.1, 'product page': 0.1,
                             'checkout': 0.0, 'contact us': 0.7},
                'User':     {'homepage': 0.3, 'signup': 0.8, 'product page': 0.3,
                             'checkout': 0.1, 'contact us': 0.1},
                'Customer': {'homepage': 0.3, 'signup': 0.1, 'product page': 0.6,
                             'checkout': 0.9, 'contact us': 0.2},
            }
Here we are simply encoding the information that we had from before, that is, the transition probability matrix and the emission probabilities. Next, we need to define our forward step:

    class ForwardBackward:
        # __init__

        def forward(self):
            forward = []
            f_previous = {}

            for i in xrange(len(self.observations)):
                f_curr = {}
                for state in self.states:
                    if i == 0:
                        prev_f_sum = self.start_probability[state]
                    else:
                        prev_f_sum = 0.0
                        for k in self.states:
                            prev_f_sum += f_previous.get(k, 0.0) * \
                                self.transition_probability[k][state]

                    f_curr[state] = self.emission_probability[state][self.observations[i]]
                    f_curr[state] = f_curr[state] * prev_f_sum

                forward.append(f_curr)
                f_previous = f_curr

            # Close the recursion by summing the final forward values
            # (we treat the end of the sequence as certain)
            p_fwd = 0.0
            for k in self.states:
                p_fwd += f_previous[k]

            return {'probability': p_fwd, 'sequence': forward}
The forward algorithm will go through each state at each observation and multiply the probabilities together to get a forward probability of how the state works in this given context. Next, we need to define the backward algorithm:

    class ForwardBackward:
        # __init__
        # forward

        def backward(self):
            backward = []
            b_prev = {}

            for i in xrange(len(self.observations) - 1, -1, -1):
                b_curr = {}
                for state in self.states:
                    if i == len(self.observations) - 1:
                        # Base case: nothing follows the last observation
                        b_curr[state] = 1.0
                    else:
                        total = 0.0
                        for k in self.states:
                            total += self.transition_probability[state][k] * \
                                self.emission_probability[k][self.observations[i + 1]] * \
                                b_prev[k]
                        b_curr[state] = total

                backward.insert(0, b_curr)
                b_prev = b_curr

            p_bkw = 0.0
            for s in self.states:
                p_bkw += self.start_probability[s] * \
                    self.emission_probability[s][self.observations[0]] * \
                    b_prev[s]

            return {'probability': p_bkw, 'sequence': backward}
The backward algorithm works pretty much the same way as the forward one, except that it goes in the opposite direction. Next, we need to try both forward and backward and assert that they are the same (otherwise, our algorithm is wrong):

class ForwardBackward:
    # __init__
    # forward
    # backward

    def forward_backward(self):
        size = len(self.observations)
        forward = self.forward()
        backward = self.backward()
        posterior = {}
        for s in self.states:
            posterior[s] = []
            for i in range(1, size):
                value = forward['sequence'][i][s] * \
                    backward['sequence'][i][s] / forward['probability']
                posterior[s].append(value)

        return [forward, backward, posterior]
The beauty of the Forward-Backward algorithm is that it's effectively testing itself as it runs. This is quite exciting. It will also solve the problem of evaluation—remember, that means figuring out how probable a given observation sequence is. Next, we'll delve into the decoding problem of figuring out the best sequence of underlying states.
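To make that self-checking property concrete, here is a small, self-contained sketch of both passes on a hypothetical two-state weather HMM; the states, observations, and probabilities are made up for illustration and are not the funnel model above:

```python
# Toy HMM with plain dicts (illustrative names and numbers). The final check
# mirrors the point above: the forward and backward passes must both yield
# the same total probability of the observation sequence.
states = ['Rain', 'Dry']
observations = ['umbrella', 'umbrella', 'no_umbrella']
start_p = {'Rain': 0.6, 'Dry': 0.4}
trans_p = {'Rain': {'Rain': 0.7, 'Dry': 0.3},
           'Dry': {'Rain': 0.4, 'Dry': 0.6}}
emit_p = {'Rain': {'umbrella': 0.9, 'no_umbrella': 0.1},
          'Dry': {'umbrella': 0.2, 'no_umbrella': 0.8}}

# Forward pass: P(observations up to t, state at t)
f_prev = {}
for i, obs in enumerate(observations):
    f_curr = {}
    for s in states:
        if i == 0:
            prev_sum = start_p[s]
        else:
            prev_sum = sum(f_prev[k] * trans_p[k][s] for k in states)
        f_curr[s] = emit_p[s][obs] * prev_sum
    f_prev = f_curr
p_forward = sum(f_prev.values())

# Backward pass: P(observations after t | state at t)
b_prev = {s: 1.0 for s in states}
for obs in reversed(observations[1:]):
    b_prev = {s: sum(trans_p[s][k] * emit_p[k][obs] * b_prev[k] for k in states)
              for s in states}
p_backward = sum(start_p[s] * emit_p[s][observations[0]] * b_prev[s]
                 for s in states)

assert abs(p_forward - p_backward) < 1e-12
```

With no explicit end state modeled here, the backward pass starts from 1.0 at the last step; the point is that both directions recover the same total probability of the observation sequence.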
The Decoding Problem Through the Viterbi Algorithm

The decoding problem is the easiest to describe. Given a sequence of observations, we want to parse out the best path of states given what we know about them. Mathematically, what we want to find is some specific π* = argmax_π P(x, π), where π is our state vector and x is the observations. To achieve this, we use the Viterbi algorithm. You can think of this as a way of constructing a maximum spanning tree. We are trying to figure out, given our current state, what is the best path to approach next. Similar to any sort of greedy algorithm, the Viterbi algorithm just iterates through all possible next steps and takes the best one. Graphically, it would look something like Figure 6-5.
Figure 6-5. Viterbi algorithm
What we see in this figure is how a state like S1 will become less relevant over time, while a state of S3 becomes even more relevant compared to the others. The arrows are shaded to show the probability dampening. What we are attempting to do with this algorithm is traverse a set of states in the most optimal way. We do this by determining the probability that a state will happen given its emissions as well as the probability that it will transition from the previous state to the current. Then we multiply those two together to get the probability that the sequence will happen. Iterating through the entire sequence, we eventually find our optimal sequence.
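The traversal just described can be sketched end to end on a toy model; the states and numbers here are hypothetical, not the chapter's code:

```python
# Viterbi on a made-up two-state HMM. V[t][s] holds the probability of the
# best path ending in state s at time t; back[t][s] remembers which previous
# state achieved it, so we can walk the optimal sequence backward at the end.
states = ['Rain', 'Dry']
observations = ['umbrella', 'umbrella', 'no_umbrella']
start_p = {'Rain': 0.6, 'Dry': 0.4}
trans_p = {'Rain': {'Rain': 0.7, 'Dry': 0.3},
           'Dry': {'Rain': 0.4, 'Dry': 0.6}}
emit_p = {'Rain': {'umbrella': 0.9, 'no_umbrella': 0.1},
          'Dry': {'umbrella': 0.2, 'no_umbrella': 0.8}}

V = [{s: start_p[s] * emit_p[s][observations[0]] for s in states}]
back = [{}]
for obs in observations[1:]:
    row, ptr = {}, {}
    for s in states:
        prev, p = max(((k, V[-1][k] * trans_p[k][s]) for k in states),
                      key=lambda kv: kv[1])
        row[s] = p * emit_p[s][obs]
        ptr[s] = prev
    V.append(row)
    back.append(ptr)

# Walk the backpointers from the most probable final state.
last = max(V[-1], key=V[-1].get)
path = [last]
for ptr in reversed(back[1:]):
    path.insert(0, ptr[path[0]])
print(path)  # the optimal state sequence
```

Each step only consults the previous column of probabilities, which is why the whole traversal stays linear in the length of the observation sequence.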
The Learning Problem

The learning problem is probably the simplest algorithm to implement. Given a sequence of states and observations, what is the most likely to happen next? We can do that purely by figuring out the next step in the Viterbi sequence. We figure out the next state by maximizing the next step given the fact there is no emission available yet. But you can figure out the most probable emission from there as well as the most probable state, and that is known as the next optimal state emission combo. If this way of solving doesn't make sense yet, don't worry: in the next section, we will delve further into using the Viterbi algorithm. Unfortunately, there isn't any free and easily accessible data available for analyzing user behaviors over time given page views, but there is a similar problem we can solve by using a part-of-speech tagger built purely using a Hidden Markov Model.
Part-of-Speech Tagging with the Brown Corpus

Given the phrase "the quick brown fox," how would you tag its parts of speech? We know that English has parts of speech like determiners, adjectives, and nouns. We would probably tag the words in this phrase as determiner, adjective, adjective, noun, respectively. We could fairly easily tag this example because we have a basic understanding of grammar, but how could we train an algorithm to do so? Well, of course because this is a chapter on HMMs, we'll use one to figure out the optimal parts of speech. Knowing what we know about them, we can use the Viterbi algorithm to figure out, for a given sequence of words, what is the best tagging sequence. For this section, we will rely on the Brown Corpus, which was the first electronic corpus. It has over a million annotated words with parts of speech in it. The list of tags is long, but rest assured that it contains all the normal tags like adjectives, nouns, and verbs. The Brown Corpus is set up using a specific kind of annotation. For each sequence of words, you will see something like this:
Most/ql important/jj of/in all/abn ,/, the/at less/ql developed/vbn countries/nns must/md be/be persuaded/vbn to/to take/vb the/at necessary/jj steps/nns to/to allocate/vb and/cc commit/vb their/pp$ own/jj resources/nns ./.
In this case, Most is ql, which means qualifier, important is jj (adjective), and so on until you reach ./. which is a period tagged as a stop: “ . ”. The only thing that this doesn’t have is a START character at the beginning. Generally speaking, when we’re writing Markov models, we want the word at t and also the word at t – 1. Because most is at the front, there is no word before it, so therefore we just use a special name, START, to show that there is a start to this sequence. That way we can measure the probability of going from START to a qualifier.
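As a quick sketch, pulling (word, tag) pairs out of a Brown-style line and prepending the implied START token can be done in a couple of lines (the line here is shortened from the sample above):

```python
# Split each token on its last slash so that words containing "/" keep
# their tag intact.
line = "Most/ql important/jj of/in all/abn ,/, ./."
pairs = [('START', 'START')] + \
        [tuple(token.rsplit('/', 1)) for token in line.split()]
print(pairs[:2])  # [('START', 'START'), ('Most', 'ql')]
```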
Setup Notes

All of the code we're using for this example can be found on GitHub. Python is constantly changing, so the README file is the best place to find out how to run the examples. There are no other dependencies for getting this example to run with Python.
Coding and Testing Design

The overall approach we will be taking to write our part-of-speech tagger is to have two classes (Figure 6-6):

CorpusParser
    This class is responsible for parsing the Brown Corpus.
POSTagger
    This class is responsible for tagging new data given the corpus training data.
Figure 6-6. Class diagram for part-of-speech tagger

Our tests in this case will focus on writing good seam tests around the Brown Corpus and cross-validating an error rate that is acceptable.
The Seam of Our Part-of-Speech Tagger: CorpusParser

The seam of a part-of-speech tagger is how you feed it data. The most important point is to feed it proper information so the part-of-speech tagger can utilize and learn from that data. First we need to make some assumptions about how we want it to work. We want to store each transition from a word tag combo in an array of two and then wrap that array in a simple class called CorpusParser.TagWord. Our initial test looks like this:

from StringIO import StringIO
import unittest

class CorpusParserTest(unittest.TestCase):
    def setUp(self):
        self.stream = StringIO("\tSeveral/ap defendants/nns ./.\n")
        self.blank = StringIO("\t \n")

    def test_parses_a_line(self):
        cp = CorpusParser()
        null = cp.TagWord(word="START", tag="START")
        several = cp.TagWord(word="Several", tag="ap")
        defendants = cp.TagWord(word="defendants", tag="nns")
        period = cp.TagWord(word=".", tag=".")

        expectations = [
            [null, several],
            [several, defendants],
            [defendants, period]
        ]

        for token in cp.parse(self.stream):
            self.assertEqual(token, expectations.pop(0))
        self.assertEqual(len(expectations), 0)

    def test_doesnt_allow_blank_lines(self):
        cp = CorpusParser()
        for token in cp.parse(self.blank):
            raise Exception("Should never happen")
This code takes two cases that are Brown Corpus–like and checks to make sure they are being parsed properly. The first case is whether we can parse the stream correctly into tokens. The second case is a gut check to make sure it ignores blank lines, as the Brown Corpus is full of them. Filling in the CorpusParser class, we would have something that initially looks like this:

class CorpusParser:
    NULL_CHARACTER = "START"
    STOP = " \n"  # token separators: space and newline
    SPLITTER = "/"

    class TagWord:
        def __init__(self, **kwargs):
            self.word = kwargs['word']
            self.tag = kwargs['tag']

        def __eq__(self, other):
            return self.word == other.word and self.tag == other.tag

    def __init__(self):
        self.ngram = 2

    def __iter__(self):
        return self

    def next(self):
        while not self.stop_iteration:
            char = self.file.read(1)

            if not char:
                self.stop_iteration = True
                if self.pos != '' and self.word != '':
                    # end of input: emit the final ngram
                    self.ngrams.pop(0)
                    self.ngrams.append(self.TagWord(word=self.word, tag=self.pos))
                    return self.ngrams
            elif char == "\t" or (self.word == '' and char in self.STOP):
                continue
            elif char == self.SPLITTER:
                self.parse_word = False
            elif char in self.STOP:
                self.ngrams.pop(0)
                self.ngrams.append(self.TagWord(word=self.word, tag=self.pos))
                self.word = ''
                self.pos = ''
                self.parse_word = True
                return self.ngrams
            elif self.parse_word:
                self.word += char
            else:
                self.pos += char
        raise StopIteration

    __next__ = next  # Python 3 compatibility

    def parse(self, file):
        self.ngrams = [
            self.TagWord(word=self.NULL_CHARACTER, tag=self.NULL_CHARACTER),
            self.TagWord(word=self.NULL_CHARACTER, tag=self.NULL_CHARACTER)
        ]
        self.word = ''
        self.pos = ''
        self.parse_word = True
        self.stop_iteration = False
        self.file = file
        return self
As in the previous chapters, implementing a parser that reads one character at a time is generally the most performant way of parsing things in Python. Now we can get into the much more interesting part: writing the part-of-speech tagger.
Writing the Part-of-Speech Tagger

At this point we are ready to write our part-of-speech tagger class. To do this we will have to take care of the following:

1. Take data from the CorpusParser
2. Store it internally so we can calculate the probabilities of word tag combos
3. Do the same for tag transitions

We want this class to be able to tell us how probable a word and tag sequence is, and to determine from a plaintext sentence what the optimal tag sequence is. To be able to do that, we need to tackle calculating probabilities first, then calculate the probability of a tag sequence with a word sequence. Last, we'll implement the Viterbi algorithm. Let's talk about the probability of a tag given its previous tag. Using something called a maximum likelihood estimate, we can assert that the probability should equal the count of the two tags together divided by the count of the previous tag. In code, that looks like this:

from collections import defaultdict

class POSTagger:
    def __init__(self, data_io):
        self.corpus_parser = CorpusParser()
        self.data_io = data_io
        self.trained = False

    def train(self):
        if not self.trained:
            self.tags = set(["START"])
            self.tag_combos = defaultdict(lambda: 0, {})
            self.tag_frequencies = defaultdict(lambda: 0, {})
            self.word_tag_combos = defaultdict(lambda: 0, {})

            for io in self.data_io:
                for line in io.readlines():
                    for ngram in self.corpus_parser.parse(line):
                        self.write(ngram)
            self.trained = True

    def write(self, ngram):
        if ngram[0].tag == 'START':
            self.tag_frequencies['START'] += 1
            self.word_tag_combos['START/START'] += 1

        self.tags.add(ngram[-1].tag)
        self.tag_frequencies[ngram[-1].tag] += 1
        self.word_tag_combos["/".join([ngram[-1].word, ngram[-1].tag])] += 1
        self.tag_combos["/".join([ngram[0].tag, ngram[-1].tag])] += 1

    def tag_probability(self, previous_tag, current_tag):
        denom = self.tag_frequencies[previous_tag]
        if denom == 0:
            return 0.0
        else:
            return self.tag_combos["/".join([previous_tag, current_tag])] / \
                float(denom)
Remember that the sequence starts with an implied tag called START. So here you see the probability of D transitioning to D is in fact two divided by three, because D transitions to D twice but D shows up three times in that sequence.
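The maximum likelihood estimate just described can be sketched with plain counters; the toy tag sequence here mirrors the B/D example and is illustrative only, not the Brown Corpus:

```python
# Count tag transitions from a toy sequence and divide to get the MLE.
from collections import Counter

tags = ['START', 'B', 'D', 'D', 'D', 'B', '.']
tag_counts = Counter(tags[:-1])              # counts of each "previous" tag
combo_counts = Counter(zip(tags, tags[1:]))  # counts of (previous, current) pairs

def tag_probability(prev_tag, curr_tag):
    denom = tag_counts[prev_tag]
    return combo_counts[(prev_tag, curr_tag)] / denom if denom else 0.0

print(tag_probability('D', 'D'))  # two of the three D's transition to another D
```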
Default Dictionaries

Dictionaries in Python are similar to associative arrays, hashes, or hashmaps. The general concept is the same: store some sort of key value pair in a data structure. Default dictionaries, on the other hand, take it a bit further. In most cases when defining a dictionary in Python, if you ask for something that isn't in the dictionary you would get a KeyError. Instead, with default dictionaries you can set a default to always return.

from collections import defaultdict

dictionary = {'a': 'b'}
dictionary['b']  # raises KeyError

default_dictionary = defaultdict(lambda: 0, dictionary)
default_dictionary['b']  # == 0
You’ll notice that we’re doing a bit of error handling for the case when zeros happen, because we will throw a divide-by-zero error. Next, we need to address the probability of word tag combinations, which we can do by introducing the following to our exist‐ ing test: import StringIO class TestPOSTagger(unittest.TestCase): def setUp(): self.stream = StringIO("A/B C/D C/D A/D A/B ./.") self.pos_tagger = POSTagger([StringIO.StringIO(self.stream)])
        self.pos_tagger.train()

    def test_calculates_probability_of_word_and_tag(self):
        self.assertEqual(self.pos_tagger.word_tag_probability("Z", "Z"), 0)

        # A and B happens 2 times, count of B happens twice, therefore 100%
        self.assertEqual(self.pos_tagger.word_tag_probability("A", "B"), 1)

        # A and D happens 1 time, count of D happens 3 times, so 1/3
        self.assertEqual(self.pos_tagger.word_tag_probability("A", "D"),
                         1.0 / 3.0)

        # START and START happens 1 time, count of START happens 1 time, so 1
        self.assertEqual(
            self.pos_tagger.word_tag_probability("START", "START"), 1)

        self.assertEqual(self.pos_tagger.word_tag_probability(".", "."), 1)
To make this work in the POSTagger class, we need to write the following:

class POSTagger:
    # __init__
    # train
    # write
    # tag_probability

    def word_tag_probability(self, word, tag):
        denom = self.tag_frequencies[tag]
        if denom == 0:
            return 0.0
        else:
            return self.word_tag_combos["/".join([word, tag])] / float(denom)
Now that we have those two things—word_tag_probability and tag_probability—we can answer the question: given a word and tag sequence, how probable is it? That is the probability of the current tag given the previous tag, multiplied by the word given the tag. In a test, it looks like this:

from functools import reduce

class TestPOSTagger(unittest.TestCase):
    # setUp
    # test_calculates_probability_of_word_and_tag

    def test_calculates_probability_of_words_and_tags(self):
        words = ['START', 'A', 'C', 'A', 'A', '.']
        tags = ['START', 'B', 'D', 'D', 'B', '.']
        tagger = self.pos_tagger

        tag_probabilities = reduce(
            (lambda x, y: x * y),
            [
                tagger.tag_probability("B", "D"),
                tagger.tag_probability("D", "D"),
                tagger.tag_probability("D", "B"),
                tagger.tag_probability("B", ".")
            ])
        word_probabilities = reduce(
            (lambda x, y: x * y),
            [
                tagger.word_tag_probability("A", "B"),  # 1
                tagger.word_tag_probability("C", "D"),
                tagger.word_tag_probability("A", "D"),
                tagger.word_tag_probability("A", "B"),  # 1
            ])

        expected = word_probabilities * tag_probabilities
        self.assertEqual(tagger.probability_of_word_tag(words, tags), expected)
So basically we are calculating word tag probabilities multiplied by the probability of tag transitions. We can easily implement this in the POSTagger class using the following:

class POSTagger:
    # __init__
    # train
    # write
    # tag_probability
    # word_tag_probability

    def probability_of_word_tag(self, word_sequence, tag_sequence):
        if len(word_sequence) != len(tag_sequence):
            raise Exception('The word and tags must be the same length!')

        length = len(word_sequence)
        probability = 1.0
        for i in range(1, length):
            probability *= (
                self.tag_probability(tag_sequence[i - 1], tag_sequence[i]) *
                self.word_tag_probability(word_sequence[i], tag_sequence[i])
            )
        return probability
Now we can figure out how probable a given word and tag sequence is. But it would be better if we were able to determine, given a sentence and training data, what the optimal sequence of tags is. For that, we need to write this simple test:

class TestPOSTagger(unittest.TestCase):
    # setUp
    # test_calculates_probability_of_word_and_tag
    # test_calculates_probability_of_words_and_tags

    def test_viterbi(self):
        training = "I/PRO want/V to/TO race/V ./. I/PRO like/V cats/N ./."
        sentence = 'I want to race.'
        tagger = POSTagger([StringIO(training)])
        tagger.train()
        expected = ['START', 'PRO', 'V', 'TO', 'V', '.']
        self.assertEqual(tagger.viterbi(sentence), expected)
This test takes a bit more to implement because the Viterbi algorithm is somewhat involved. So let's go through this step by step. The first problem is that our method accepts a string, not a sequence of tokens. We need to split by whitespace and treat stop characters as their own word. So to do that, we write the following to set up the Viterbi algorithm:

import re

class POSTagger:
    # __init__
    # train
    # write
    # tag_probability
    # word_tag_probability
    # probability_of_word_tag

    def viterbi(self, sentence):
        parts = re.sub(r"([\.\?!])", r" \1", sentence).split()
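For example, the substitution pads stop characters with a space so that a plain whitespace split treats them as their own tokens:

```python
import re

# Pad sentence-ending punctuation with a space, then split on whitespace.
sentence = "I want to race."
parts = re.sub(r"([\.\?!])", r" \1", sentence).split()
print(parts)  # ['I', 'want', 'to', 'race', '.']
```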
The Viterbi algorithm is an iterative algorithm, meaning at each step it figures out where it should go next based on the previous answer. So we will need to memoize the previous probabilities as well as keep the best tag. We can initialize and figure out what the best tag is as follows:

class POSTagger:
    # __init__
    # train
    # write
    # tag_probability
    # word_tag_probability
    # probability_of_word_tag

    def viterbi(self, sentence):
        # parts

        last_viterbi = {}
        backpointers = ["START"]

        for tag in self.tags:
            if tag == 'START':
                continue
            else:
                probability = self.tag_probability('START', tag) * \
                    self.word_tag_probability(parts[0], tag)
                if probability > 0:
                    last_viterbi[tag] = probability

        if last_viterbi:
            backpointers.append(max(last_viterbi, key=last_viterbi.get))
        else:
            backpointers.append(max(self.tag_frequencies,
                                    key=self.tag_frequencies.get))
At this point, last_viterbi has only one option, which is {'PRO': 1.0}. That is because the probability of transitioning from START to anything else is zero. Likewise, backpointers will have START and PRO in it. So, now that we've set up our initial step, all we need to do is iterate through the rest:

class POSTagger:
    # __init__
    # train
    # write
    # tag_probability
    # word_tag_probability
    # probability_of_word_tag

    def viterbi(self, sentence):
        # parts
        # initialization

        for part in parts[1:]:
            viterbi = {}
            for tag in self.tags:
                if tag == 'START':
                    continue
                if not last_viterbi:
                    break
                best_previous = max(
                    ((prev_tag, probability *
                      self.tag_probability(prev_tag, tag) *
                      self.word_tag_probability(part, tag))
                     for prev_tag, probability in last_viterbi.items()),
                    key=lambda pair: pair[1])
                best_tag = best_previous[0]
                probability = last_viterbi[best_tag] * \
                    self.tag_probability(best_tag, tag) * \
                    self.word_tag_probability(part, tag)
                if probability > 0:
                    viterbi[tag] = probability
            last_viterbi = viterbi
            if last_viterbi:
                backpointers.append(max(last_viterbi, key=last_viterbi.get))
            else:
                backpointers.append(max(self.tag_frequencies,
                                        key=self.tag_frequencies.get))
        return backpointers

[…] you will find that in fact this turns out to be linear (Figure 7-5). Now you can see that these two circles are separate and you can draw a plane easily between the two. If you took that and mapped it back to the original plane, then there would in fact be a third circle in the middle that is a straight plane.
112
|
Chapter 7: Support Vector Machines
Figure 7-5. Separating two circles using a kernel trick

Next time you need a bar trick, try that out on someone. This doesn't just work for circles, but unfortunately the visualizations of four or more dimensions are confusing, so I left them out. There are many different types of projections (or kernels), such as:

• Polynomial kernel (heterogeneous and homogeneous)
• Radial basis functions
• Gaussian kernels

I do encourage you to read up more on kernels, although they will most likely distract us from the original intent of this section!
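The circle intuition above can be checked numerically: lift each 2D point into 3D by adding the feature x² + y², and the two rings become separable by a single threshold. This is an illustrative sketch with made-up points, not the chapter's code:

```python
# Two concentric rings are not linearly separable in 2D, but in the lifted
# space z = x**2 + y**2 the inner ring sits at z = 1 and the outer ring at
# z = 9, so the plane z = 5 separates them perfectly.
import math

inner = [(math.cos(t), math.sin(t)) for t in [i * 0.5 for i in range(12)]]
outer = [(3 * math.cos(t), 3 * math.sin(t)) for t in [i * 0.5 for i in range(12)]]

def lift(point):
    x, y = point
    return (x, y, x ** 2 + y ** 2)  # the added dimension

assert all(lift(p)[2] < 5 for p in inner)
assert all(lift(p)[2] > 5 for p in outer)
```

This is exactly the mapping a polynomial-style kernel performs implicitly, without ever materializing the extra coordinate.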
One downside to using kernels, though, is that you can easily overfit data. In a lot of ways they operate like splines. But one way to avoid overfitting is to introduce slack.
Optimizing with Slack

What if our data isn't separable by a line? Luckily, mathematicians have thought about this, and in mathematical optimization there's a concept called "slack." This idea introduces another variable that is minimized but reduces the worry of overfitting. In practice, with SVMs the amount of slack is determined by a free parameter C, which could be thought of as a way to tell the algorithm how much slack to add or not (Figure 7-6).
Figure 7-6. Slack introduced into model. The highlighted faces are basically wrong or incorrect data points.

As I discussed in Chapter 3, overfitting is a downfall of machine learning and inductive biases, so avoiding it is a good thing to do. Okay, enough theory—let's build a sentiment analyzer.
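Before we do, the slack term can be made concrete with a tiny numeric sketch. For a fixed candidate line w·x + b, each point's slack is ξᵢ = max(0, 1 − yᵢ(w·xᵢ + b)), and the soft-margin objective is ||w||²/2 + C·Σξᵢ; all numbers here are made up for illustration:

```python
# Four labeled points; the last one sits on the wrong side of the candidate
# line, so it is the only one that accrues slack.
points = [((2.0, 1.0), 1), ((1.5, 2.0), 1), ((-2.0, -1.0), -1),
          ((0.2, 0.1), -1)]
w, b = (1.0, 1.0), 0.0  # a hypothetical candidate separating line

def slack(x, y):
    score = w[0] * x[0] + w[1] * x[1] + b
    return max(0.0, 1.0 - y * score)

total_slack = sum(slack(x, y) for x, y in points)
margin_term = 0.5 * (w[0] ** 2 + w[1] ** 2)

# A large C punishes the misclassified point much harder than a small one.
objectives = {C: margin_term + C * total_slack for C in (0.1, 10.0)}
print(objectives)
```

This is the trade-off the free parameter C controls: a small C tolerates misclassified points for a wider margin, while a large C drives the optimizer toward fitting every point, which is where overfitting creeps back in.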
Sentiment Analyzer

In this section, we'll build a sentiment analyzer that determines the sentiment of movie reviews. The example we'll use also applies to working with support tickets. We'll first talk about what this tool will look like conceptually in a class diagram. Then, after identifying the pieces of the tool, we will build a Corpus class, a CorpusSet class, and a SentimentClassifier class. The Corpus and CorpusSet classes involve transforming the text into numerical information. SentimentClassifier is where we will then use the SVM algorithm to build this sentiment analyzer.
Setup Notes

All of the code we are using for this example can be found on the thoughtfulml repository on GitHub.
Python is constantly changing, so the README file is the best place to get up to speed on running the examples. There are no additional dependencies beyond a running Python version to run this example.
Coding and Testing Design

In this section we will be building three classes to support classifying incoming text as either positive or negative sentiment (Figure 7-7).

Corpus
    This class will parse sentiment text and store it as a corpus with frequencies in it.
CorpusSet
    This is a collection of multiple corpora that each have a sentiment attached to it.
SentimentClassifier
    Utilizes a CorpusSet to train and classify sentiment.
Figure 7-7. Class diagram for movie-review sentiment analyzer
What Do Corpus and Corpora Mean?

Corpus, like corpse, means a body, but in this case it's a body of writings. This word is used heavily in the natural-language processing community to signal a big group of previous writings that can be used to infer knowledge. In our example, we are using corpus to refer to a body of writings around a certain sentiment. Corpora is the plural of corpus.
Testing in SVMs primarily deals with setting a threshold of acceptance with accuracy and then tweaking the model until it works well enough. That is the concept we will apply in this chapter.
SVM Testing Strategies

Besides the normal TDD affair of writing unit tests for our seams and building a solid code basis, there are additional testing considerations for SVMs:

• Speed of training the model before and after configuration changes
• Confusion matrix and precision and recall
• Sensitivity analysis (correction cascades, configuration debt)

I will talk about these concerns throughout this section.
Corpus Class

Our Corpus class will handle the following:

• Tokenizing text
• Sentiment leaning, whether 'negative' or 'positive'
• Mapping from sentiment leaning to a numerical value
• Returning a unique set of words from the corpus

When we write the seam test for this, we end up with the following:

from StringIO import StringIO
import unittest

from corpus import Corpus

class TestCorpus(unittest.TestCase):
    def setUp(self):
        self.negative = StringIO('I hated that so much')
        self.negative_corpus = Corpus(self.negative, 'negative')
        self.positive = StringIO('loved movie!! loved')
        self.positive_corpus = Corpus(self.positive, 'positive')
    def test_trivial(self):
        """consumes multiple files and turns it into sparse vectors"""
        self.assertEqual('negative', self.negative_corpus.sentiment)

    def test_tokenize1(self):
        """downcases all the word tokens"""
        self.assertListEqual(['quick', 'brown', 'fox'],
                             Corpus.tokenize('Quick Brown Fox'))

    def test_tokenize2(self):
        """ignores all stop symbols"""
        self.assertListEqual(['hello'], Corpus.tokenize('"\'hello!?!?!.\'" '))

    def test_tokenize3(self):
        """ignores the unicode space"""
        self.assertListEqual(['hello', 'bob'],
                             Corpus.tokenize(u'hello\u00A0bob'))

    def test_positive(self):
        """consumes a positive training set"""
        self.assertEqual('positive', self.positive_corpus.sentiment)

    def test_words(self):
        """consumes a positive training set and unique set of words"""
        self.assertEqual({'loved', 'movie'}, self.positive_corpus.get_words())

    def test_sentiment_code_1(self):
        """defines a sentiment_code of 1 for positive"""
        self.assertEqual(1, Corpus(StringIO(''), 'positive').sentiment_code)

    def test_sentiment_code_minus1(self):
        """defines a sentiment_code of -1 for negative"""
        self.assertEqual(-1, Corpus(StringIO(''), 'negative').sentiment_code)
StringIO makes strings look like IO objects, which makes it easy to
test file IO–type operations on strings.
As you learned in Chapter 3, there are many different ways of tokenizing text, such as extracting out stems, frequency of letters, emoticons, and words. For our purposes, we will just tokenize words. These are defined as strings between nonalpha characters. So out of a string like "The quick brown fox" we would extract the, quick, brown, fox (Figure 7-8). We don't care about punctuation and we want to be able to skip Unicode spaces and nonwords.
Figure 7-8. The many ways of tokenizing text

Writing the Corpus class, we end up with:

import re

class Corpus(object):
    skip_regex = re.compile(r'[\'"\.\?\!]+')
    space_regex = re.compile(r'\s', re.UNICODE)
    stop_words = [x.strip() for x in open('data/stopwords.txt').readlines()]
    sentiment_to_number = {'positive': 1, 'negative': -1}

    @classmethod
    def tokenize(cls, text):
        cleared_text = cls.skip_regex.sub('', text)
        parts = cls.space_regex.split(cleared_text)
        parts = [part.lower() for part in parts]
        return [p for p in parts if len(p) > 0 and p not in cls.stop_words]

    def __init__(self, io, sentiment):
        self._io = io
        self._sentiment = sentiment
        self._words = None

    @property
    def sentiment(self):
        return self._sentiment

    @property
    def sentiment_code(self):
        return self.sentiment_to_number[self._sentiment]

    def get_words(self):
        if self._words is None:
            self._words = set()
            for line in self._io:
                for word in Corpus.tokenize(line):
                    self._words.add(word)
            self._io.seek(0)
        return self._words

    def get_sentences(self):
        for line in self._io:
            yield line
Now to create our next class, CorpusSet.
CorpusSet Class

The CorpusSet class brings multiple corpora together and gives us a good basis to use SVMs:

from StringIO import StringIO
import unittest

from numpy import array
from scipy.sparse import csr_matrix

from corpus import Corpus
from corpus_set import CorpusSet

class TestCorpusSet(unittest.TestCase):
    def setUp(self):
        self.positive = StringIO('I love this country')
        self.negative = StringIO('I hate this man')
        self.positive_corp = Corpus(self.positive, 'positive')
        self.negative_corp = Corpus(self.negative, 'negative')
        self.corpus_set = CorpusSet([self.positive_corp, self.negative_corp])

    def test_compose(self):
        """composes two corpuses together"""
        self.assertEqual({'love', 'country', 'hate', 'man'},
                         self.corpus_set.words)

    def test_sparse(self):
        """returns a set of sparse vectors to train on"""
        expected_ys = [1, -1]
        expected_xes = csr_matrix(array(
            [[1, 1, 0, 0],
             [0, 0, 1, 1]]
        ))
        self.corpus_set.calculate_sparse_vectors()
        ys = self.corpus_set.yes
        xes = self.corpus_set.xes
        self.assertListEqual(expected_ys, ys)
        self.assertListEqual(list(expected_xes.data), list(xes.data))
        self.assertListEqual(list(expected_xes.indices), list(xes.indices))
        self.assertListEqual(list(expected_xes.indptr), list(xes.indptr))
To make these tests pass, we need to build a CorpusSet class that takes in multiple corpora, transforms all of that into a matrix of features, and has the properties words, xes, and yes (the latter two being the x's and y's). Let's start by building a CorpusSet class:

import numpy as np
from scipy.sparse import csr_matrix, vstack

from corpus import Corpus

class CorpusSet(object):
    def __init__(self, corpora):
        self._yes = None
        self._xes = None
        self._corpora = corpora
        self._words = set()
        for corpus in self._corpora:
            self._words.update(corpus.get_words())

    @property
    def words(self):
        return self._words

    @property
    def xes(self):
        return self._xes

    @property
    def yes(self):
        return self._yes
This doesn’t do much except store all of the words in a set for later use. It does that by iterating the corpora and storing all the unique words. From here we need to calcu‐ late the sparse vectors we will use in the SVM, which depends on building a feature matrix composed of feature vectors: class CorpusSet # __init__ # words # xes # yes def calculate_sparse_vectors(self):
        self._yes = []
        self._xes = None
        for corpus in self._corpora:
            vectors = self.feature_matrix(corpus)
            if self._xes is None:
                self._xes = vectors
            else:
                self._xes = vstack((self._xes, vectors))
            self._yes.extend([corpus.sentiment_code] * vectors.shape[0])

    def feature_matrix(self, corpus):
        data = []
        indices = []
        indptr = [0]
        for sentence in corpus.get_sentences():
            sentence_indices = self._get_indices(sentence)
            indices.extend(sentence_indices)
            data.extend([1] * len(sentence_indices))
            indptr.append(len(indices))
        feature_matrix = csr_matrix((data, indices, indptr),
                                    shape=(len(indptr) - 1, len(self._words)),
                                    dtype=np.float64)
        feature_matrix.sort_indices()
        return feature_matrix

    def feature_vector(self, sentence):
        indices = self._get_indices(sentence)
        data = [1] * len(indices)
        indptr = [0, len(indices)]
        vector = csr_matrix((data, indices, indptr),
                            shape=(1, len(self._words)),
                            dtype=np.float64)
        return vector

    def _get_indices(self, sentence):
        word_list = list(self._words)
        indices = []
        for token in Corpus.tokenize(sentence):
            if token in self._words:
                index = word_list.index(token)
                indices.append(index)
        return indices
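The (data, indices, indptr) triplet handed to csr_matrix above can be illustrated with plain lists; this hypothetical two-sentence vocabulary shows the layout scipy expects:

```python
# Build the CSR triplet by hand (no scipy required for the layout itself).
vocabulary = ['love', 'country', 'hate', 'man']
sentences = [['love', 'country'], ['hate', 'man']]

data, indices, indptr = [], [], [0]
for sentence in sentences:
    for word in sentence:
        indices.append(vocabulary.index(word))  # column of the word
        data.append(1)                          # its count/flag
    indptr.append(len(indices))                 # row boundary

# Row i spans indices[indptr[i]:indptr[i + 1]].
print(data, indices, indptr)  # [1, 1, 1, 1] [0, 1, 2, 3] [0, 2, 4]
```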
At this point we should have enough to validate our model using cross-validation. For that we will get into building the actual sentiment classifier as well as model validation.
Model Validation and the Sentiment Classifier

Now we can get to writing the cross-validation unit tests, which will determine how well our classification works. We do this by having two different tests. The first checks for a cross-validation error rate of 35% or less; the second ensures that when the classifier trains and validates on the same data, there is zero error:

from fractions import Fraction
import os
import unittest

from sentiment_classifier import SentimentClassifier

class TestSentimentClassifier(unittest.TestCase):
    def setUp(self):
        pass

    def test_validate(self):
        """cross validates with an error of 35% or less"""
        neg = self.split_file('data/rt-polaritydata/rt-polarity.neg')
        pos = self.split_file('data/rt-polaritydata/rt-polarity.pos')

        classifier = SentimentClassifier.build([
            neg['training'],
            pos['training']
        ])

        c = 2 ** 7
        classifier.c = c
        classifier.reset_model()

        n_er = self.validate(classifier, neg['validation'], 'negative')
        p_er = self.validate(classifier, pos['validation'], 'positive')

        total = Fraction(n_er.numerator + p_er.numerator,
                         n_er.denominator + p_er.denominator)

        print(total)
        self.assertLess(total, 0.35)

    def test_validate_itself(self):
        """yields a zero error when it uses itself"""
        classifier = SentimentClassifier.build([
            'data/rt-polaritydata/rt-polarity.neg',
            'data/rt-polaritydata/rt-polarity.pos'
        ])

        c = 2 ** 7
        classifier.c = c
        classifier.reset_model()

        n_er = self.validate(classifier,
                             'data/rt-polaritydata/rt-polarity.neg',
                             'negative')
        p_er = self.validate(classifier,
                             'data/rt-polaritydata/rt-polarity.pos',
                             'positive')

        total = Fraction(n_er.numerator + p_er.numerator,
                         n_er.denominator + p_er.denominator)

        print(total)
        self.assertEqual(total, 0)
Both tests lean on two utility functions, validate and split_file, which could also be achieved using scikit-learn or other packages:

class TestSentimentClassifier(unittest.TestCase):
    def validate(self, classifier, file, sentiment):
        total = 0
        misses = 0
        with(open(file, 'rb')) as f:
            for line in f:
                if classifier.classify(line) != sentiment:
                    misses += 1
                total += 1
        return Fraction(misses, total)

    def split_file(self, filepath):
        ext = os.path.splitext(filepath)[1]
        counter = 0

        training_filename = 'tests/fixtures/training%s' % ext
        validation_filename = 'tests/fixtures/validation%s' % ext

        with(open(filepath, 'rb')) as input_file:
            with(open(validation_filename, 'wb')) as val_file:
                with(open(training_filename, 'wb')) as train_file:
                    for line in input_file:
                        if counter % 2 == 0:
                            val_file.write(line)
                        else:
                            train_file.write(line)
                        counter += 1

        return {'training': training_filename,
                'validation': validation_filename}
These tests validate that our model has a high enough accuracy to be useful. Now we need to write our SentimentClassifier, which involves building a class that responds to the following methods:

build
    This class method will build a SentimentClassifier off of files instead of a CorpusSet.
present_answer
    This method will take the numerical representation and output something useful to the end user.
c
    This returns the C parameter, which determines how wide the error margins are on SVMs.
reset_model
    This resets the model.
words
    This returns the words of the corpus set.
fit_model
    This does the heavy lifting and calls into the SVM library that scikit-learn provides.
classify
    This method classifies whether a string carries negative or positive sentiment.
This method classifies whether the string is negative or positive sentiment. import os from numpy import ndarray from sklearn import svm from corpus import Corpus from corpus_set import CorpusSet class SentimentClassifier(object): ext_to_sentiment = {'.pos': 'positive', '.neg': 'negative'} number_to_sentiment = {-1: 'negative', 1: 'positive'} @classmethod def present_answer(cls, answer): if isinstance(answer, ndarray): answer = answer[0] return cls.number_to_sentiment[answer] @classmethod def build(cls, files): corpora = [] for file in files: ext = os.path.splitext(file)[1] corpus = Corpus(open(file, 'rb'), cls.ext_to_sentiment[ext]) corpora.append(corpus) corpus_set = CorpusSet(corpora)
        return SentimentClassifier(corpus_set)

    def __init__(self, corpus_set):
        self._trained = False
        self._corpus_set = corpus_set
        self._c = 2 ** 7
        self._model = None

    @property
    def c(self):
        return self._c

    @c.setter
    def c(self, cc):
        self._c = cc

    def reset_model(self):
        self._model = None

    def words(self):
        return self._corpus_set.words

    def classify(self, string):
        if self._model is None:
            self._model = self.fit_model()
        prediction = self._model.predict(self._corpus_set.feature_vector(string))
        return self.present_answer(prediction)

    def fit_model(self):
        self._corpus_set.calculate_sparse_vectors()
        y_vec = self._corpus_set.yes
        x_mat = self._corpus_set.xes
        clf = svm.SVC(C=self.c,
                      cache_size=1000,
                      gamma=1.0 / len(y_vec),
                      kernel='linear',
                      tol=0.001)
        clf.fit(x_mat, y_vec)
        return clf
Up until this point we have discussed how to build the model, but not how to tune or verify it. This is where a confusion matrix, precision, recall, and sensitivity analysis come into play.
Aggregating Sentiment

Now that we have a model that calculates sentiment from text, there's an additional issue of how to take multiple tickets per customer and map them to one measure of sentiment. There are a few ways of doing this:
• Mode
• Average (which would yield a score between –1 and 1)
• Exponential moving average

Each has benefits and downsides, so to explain the differences, let's take an example of a few customers with different sentiments (Table 7-1).

Table 7-1. Customer sentiment over time

Sequence number  Alice  Bob  Terry
1                 1     –1    1
2                 1     –1    1
3                 1     –1    1
4                 1     –1   –1
5                 1     –1   –1
6                 1     –1    1
7                –1     –1    1
8                –1      1    1
9                –1      1    1
10               –1      1    1
In general you can expect customers to change their minds over time. Alice was positive to begin with but became negative in her sentiment. Bob was negative in the beginning but became positive towards the end, and Terry was mostly positive but had some negative sentiment in there.

This brings up an interesting implementation detail. If we map these data to either a mode or average, then we will weight heavily things that are irrelevant: Alice is unhappy right now, while Bob is happy right now. Mode and average are both fast to implement, but there is another method called the exponentially weighted moving average, or EWMA for short.
Exponentially Weighted Moving Average

Exponential moving averages are used heavily in finance since they weight recent data much more heavily than old data. Things change quickly in finance, and people can change as well. Unlike a normal average, this aims to change the weights from 1/N to some function based on a free parameter α, which tunes how much weight to give to the past.
So instead of the formula for a simple average:

    Ŷ_{t+1} = (Y_0 + Y_1 + ⋯ + Y_t) / (t + 1)

we would use the formula:

    Ŷ_{t+1} = α Y_t + α(1 − α) Y_{t−1} + α(1 − α)² Y_{t−2} + α(1 − α)³ Y_{t−3} + ⋯

This can be transformed into a recursive formula:

    Ŷ_{t+1} = α Y_t + (1 − α) Ŷ_t
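To make this concrete, here is a minimal sketch of the recursive form, seeded with each customer's first observation (the seeding choice is our assumption; the text does not specify it). With α = 0.94 it reproduces the EWMA figures reported for Alice, Bob, and Terry:

```python
# Recursive EWMA: seed with the first observation, then fold in each new
# value as estimate = alpha * value + (1 - alpha) * previous_estimate.
def ewma(values, alpha=0.94):
    estimate = values[0]
    for value in values[1:]:
        estimate = alpha * value + (1 - alpha) * estimate
    return estimate

alice = [1, 1, 1, 1, 1, 1, -1, -1, -1, -1]
bob = [-1, -1, -1, -1, -1, -1, -1, 1, 1, 1]
terry = [1, 1, 1, -1, -1, 1, 1, 1, 1, 1]

print(ewma(alice))  # about -0.99997408
print(ewma(bob))    # about 0.999568
print(ewma(terry))  # about 0.99999845
```

Notice how quickly the estimate snaps toward the most recent values: with a large α, a handful of recent observations dominate everything that came before.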
Getting back to our original question of how to implement this, let's look at the mode, average, and EWMA together (Table 7-2).

Table 7-2. Mode, average, and EWMA compared

Name   Mode  Average  EWMA (α = 0.94)
Alice   1     0.2     –0.99997408
Bob    –1    –0.4      0.999568
Terry   1     0.6      0.99999845
As you can see EWMA maps our customers much better than a plain average or mode does. Alice is negative right now, Bob is positive now, and Terry has always been mostly positive.
Mapping Sentiment to Bottom Line

We've been able to build a model that takes textual data and splits it into two sentiment categories, either positive or negative. This is great! But it doesn't quite solve our problem, which originally was determining whether our customers were unhappy or not. There is a certain amount of bias that one needs to avoid here: just because we have been able to map sentiment successfully onto a given piece of text doesn't mean that we can tell whether the customer is happy or not. Correlation isn't causation, as they say. But what can we do instead?
We can learn from this and understand our customers better, and also feed this data into other important algorithms, such as whether sentiment of text is correlated with more value from the customer or not (e.g., Figure 7-9).
Figure 7-9. Generally speaking, the more complaints there are, the less happiness there is This information is useful to running a business and improves our understanding of the data.
Conclusion

The SVM algorithm is very well suited to classifying two separable classes. It can be modified to separate more than two classes and doesn't suffer from the curse of dimensionality that KNN does. This chapter taught you how SVM can be used to separate happy and unhappy customers, as well as how to assign sentiment to movie data. But more importantly, we've thought about how to go about testing our intuition of whether happy customers yield more money for our business.
CHAPTER 8
Neural Networks
Humans are amazing pattern matchers. When we come out of the womb, we are able to make sense of the surrounding chaos until we have learned how to operate effectively in the world. This has to do with our upbringing and our environment, but most importantly our brain.

Your brain contains roughly 86 billion neurons that talk to one another over a network of synapses. These neurons are able to control your body functions as well as form thoughts, memories, and mental models. Each one of these neurons acts as part of a computer network, taking inputs and sending outputs to other neurons, all communicating in orchestrated fashion.

Mathematicians decided a long time ago that it would be interesting to piece together mathematical representations of our brains, called neural networks. While the original research is over 60 years old, many of the techniques conceived back then still apply today and can be used to build models for problems that are tricky to compute.

In this chapter we will discuss neural networks in depth. We'll cover:

• Threshold logic, or how to build a Boolean function
• Neural networks as chaotic Boolean functions
• How to construct a feed-forward neural net
• Testing strategies for neural networks through gradient descent
• An example of classifying the language of handwritten text
What Is a Neural Network?

In a lot of ways neural networks are the perfect machine learning construct. They are a way of mapping inputs to a general output (see Figure 8-1).
Figure 8-1. Neural networks: the perfect black box

What is great about neural networks is that, unlike perceptrons, they can be used to model complexity. So for instance, if we have three inputs and one output, we could arbitrarily set an interior complexity of 2 or 3 just based on our domain knowledge of the problem.
History of Neural Nets

Early research into the brain found that a network of neurons has the ability to recognize patterns and learn from previous data. For instance, if a child is shown a picture of eight dogs, she starts to understand what a dog looks like. This research was expanded to include a more artificial bent when Warren McCulloch and Walter Pitts invented threshold logic. Threshold logic combines binary information to determine logical truth. They suggested using something called a step function, which attached a threshold to either accept or reject a summation of previous information. After many years of research, neural networks and threshold logic were combined to form what we call an artificial neural network.
Boolean Logic

As programmers, we're constantly dealing with Boolean functions, which return either yes or no (true or false). Another way of thinking about Boolean data is to encode yes or no with binary bits (0 for false, 1 for true).
130
|
Chapter 8: Neural Networks
This is such a common occurrence that there already exist many functions that deal with Boolean data. Functions such as OR, AND, NAND, NOR, and NOT are all Boolean functions. They take in inputs that are true or false and output something that is true or false. These have been used for great advances in the electronics community through digital logic gates and can be composed together to solve many problems. But how would we go about constructing something like this? A simple example of modeling the OR function would be the following:

    OR(a, b) = min(1, a + b)
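This clipped-sum trick can be run directly as code. As a sketch, here is OR, plus an AND variant built the same way; note that the AND formula is our own extension of the idea, not something given in the text:

```python
def bool_or(a, b):
    # OR as a clipped sum: any true input pushes the sum to at least 1
    return min(1, a + b)

def bool_and(a, b):
    # AND needs both inputs: shift the sum down by 1 and clip at 0,
    # so only a + b == 2 survives as 1
    return max(0, a + b - 1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, bool_or(a, b), bool_and(a, b))
```

Both functions are just thresholded sums of their inputs, which is exactly the shape a neuron will take later in the chapter.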
Perceptrons

Perceptrons take the idea of Boolean logic even further to include more fuzzy logic. They usually involve returning a value based on whether a threshold is met. Let's say that you're a teacher and you wish to assign pass/fail grades to your students at the end of the quarter. Obviously you need to come up with a way of cutting off the people who failed from the ones who didn't. This can be quite subjective but usually follows a general procedure of:

def threshold(x):
    if sum(weights * x) + b > 0.5:
        return 1
    else:
        return 0
x is a vector of all the grades you collected over the entire quarter, and weights is a vector of weightings. For instance, you might want to weight the final grade higher. b is just a freebie to the students for showing up.
Using such a simple formula, we could search for the optimal weightings by determining a priori how many people we'd like to fail. Let's say we have 100 students and only want to fail the bottom 10%. This goal is something we can actually code.
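As a minimal sketch of that grading perceptron, here is a version with made-up weights (three quizzes plus a final counted double) and a small freebie b; none of these numbers come from the text:

```python
# Hypothetical weighting: three quizzes and a final, with the final
# counted for half of the grade. The weights sum to 1.
weights = [0.125, 0.125, 0.25, 0.5]
b = 0.05  # a small "freebie" for showing up

def threshold(grades):
    # Fire (pass) when the weighted grade plus the freebie clears 0.5
    weighted_sum = sum(w * g for w, g in zip(weights, grades))
    return 1 if weighted_sum + b > 0.5 else 0

print(threshold([0.9, 0.8, 0.7, 0.85]))  # strong student: 1 (pass)
print(threshold([0.3, 0.2, 0.4, 0.35]))  # struggling student: 0 (fail)
```

To fail exactly the bottom 10% of 100 students, you would score everyone this way, sort the weighted sums, and move the 0.5 cutoff to the tenth-lowest score.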
How to Construct Feed-Forward Neural Nets

There are many different kinds of neural networks, but this chapter will focus on feed-forward networks and recurrent networks. What makes neural networks special is their use of a hidden layer of weighted functions called neurons, with which you can effectively build a network that maps a lot of other functions (Figure 8-2). Without a hidden layer of functions, neural networks would be just a set of simple weighted functions.
Figure 8-2. A feed-forward network

Neural networks are denoted by the number of neurons per layer. For example, if we have 20 neurons in our input layer, 10 in one hidden layer, and 5 in an output layer, it would be a 20-10-5 network. If there is more than one hidden layer, then we would denote it as, say, 20-7-7-5 (the two middle 7s are layers with 7 nodes apiece).

To summarize, then, neural networks comprise the following parts:

• The input layer
• The hidden layer(s)
• Neurons
• The output layer
• The training algorithm

Next, I'll explain what each of these parts does and how it works.
Input Layer

The input layer, shown in Figure 8-3, is the entry point of a neural network: it is where the inputs you give the model enter. There are no neurons in this layer because its main purpose is to serve as a conduit to the hidden layer(s). The input type is important, as neural networks work with only two types: symmetric or standard.
Figure 8-3. Input layer of neural network

When training a neural network, we have observations and inputs. Taking the simple example of an exclusive OR (also known as XOR), we have the truth table shown in Table 8-1.

Table 8-1. XOR truth

A  B  XOR(A, B)
0  0  0
0  1  1
1  0  1
1  1  0
Another way of representing XOR is to look at a Venn diagram (Figure 8-4). Given two sets of data, the shaded area shows the XOR area. Notice that the middle is empty.
Figure 8-4. XOR function in a Venn diagram (Source: Wikimedia)

In this case, we have four observations and two inputs, which could either be true or false. Neural networks do not work off of true or false, though, and knowing how to
code the input is key. We’ll need to translate these to either standard or symmetric inputs.
Standard inputs

The standard range for input values is between 0 and 1. In our previous XOR example, we would code true as 1 and false as 0. This style of input has one downside: if your data is sparse, meaning full of 0s, it can skew results. Having a data set with lots of 0s means we risk the model breaking down. Only use standard inputs if you know that there isn't sparse data.
Symmetric inputs

Symmetric inputs avoid the issue with 0s. These are between –1 and 1. In our preceding example, –1 would be false, and 1 would be true. This kind of input has the benefit of our model not breaking down because of the zeroing-out effect. In addition to that, it puts less emphasis on the middle of a distribution of inputs. If we introduced a maybe into the XOR calculation, we could map that as 0 and ignore it.

Inputs can be used in either the symmetric or standard format but need to be marked as such, because the way we calculate the output of neurons depends on this.
Hidden Layers

Without hidden layers, neural networks would be a set of weighted linear combinations. In other words, hidden layers permit neural networks to model nonlinear data (Figure 8-5).
Figure 8-5. The hidden layer of a network

Each hidden layer contains a set of neurons (Figure 8-6), which then pass to the output layer.
Figure 8-6. The neurons of the network
Neurons

Neurons are weighted linear combinations that are wrapped in an activation function. The weighted linear combination (or sum) is a way of aggregating all of the previous neurons' data into one output for the next layer to consume as input. Activation functions, shown in Figures 8-7 through 8-11, serve as a way to normalize data so it's either symmetric or standard. As a network is feeding information forward, it is aggregating previous inputs into weighted sums. We take the value y and compute the activated value based on an activation function.
Figure 8-7. A neuron is a summation of previous inputs
Activation Functions

As mentioned, activation functions, some of which are listed in Table 8-2, serve as a way to normalize data between either the standard or symmetric ranges. They also need to be differentiable, because of how we find weights in a training algorithm.

Table 8-2. Activation functions

Name       Standard                               Symmetric
Sigmoid    1 / (1 + e^(−2·sum))                   2 / (1 + e^(−2·sum)) − 1
Cosine     cos(sum) / 2 + 0.5                     cos(sum)
Sine       sin(sum) / 2 + 0.5                     sin(sum)
Gaussian   e^(−sum²) / 2 + 0.5                    e^(−sum²) · 2 − 1
Elliott    0.5 · sum / (1 + |sum|) + 0.5          sum / (1 + |sum|)
Linear     sum > 1 ? 1 : (sum < 0 ? 0 : sum)      sum > 1 ? 1 : (sum < –1 ? –1 : sum)
Threshold  sum < 0 ? 0 : 1                        sum < 0 ? –1 : 1
The big advantage of using activation functions is that they serve as a way of buffering incoming values at each layer. This is useful because neural networks have a way of finding patterns and forgetting about the noise.

There are two main categories for activation functions: sloped or periodic. In most cases, the sloped activation functions (shown in Figures 8-8 and 8-10) are a suitable default choice. The periodic functions (shown in Figures 8-9 and 8-11) are used for modeling data with lots of noise. They generally take the form of either a sine or cosine function.
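A few of these functions can be sketched in plain Python, using the formulas from Table 8-2, to see how each squashes a weighted sum into its range:

```python
import math

def sigmoid_standard(s):
    # squashes any weighted sum into (0, 1)
    return 1.0 / (1 + math.exp(-2 * s))

def sigmoid_symmetric(s):
    # the same curve rescaled into (-1, 1)
    return 2.0 / (1 + math.exp(-2 * s)) - 1

def elliott_symmetric(s):
    # a cheaper sigmoidal shape: no exponential, just an absolute value
    return s / (1.0 + abs(s))

for s in (-10, 0, 10):
    print(sigmoid_standard(s), sigmoid_symmetric(s), elliott_symmetric(s))
```

Feeding in a large positive or negative sum shows the buffering effect: the outputs saturate near the edges of the range instead of growing without bound.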
Figure 8-8. Symmetric sloped activation functions
Figure 8-9. Symmetric periodic activation functions
Figure 8-10. Standard sloped activation functions
Figure 8-11. Standard periodic activation functions

Sigmoid is the default function to be used with neurons because of its ability to smooth out the decision. Elliott is a sigmoidal function that is quicker to compute, so it's the choice I make. Cosine and sine waves are used when you are mapping something that has a random-looking process associated with it. In most cases, these trigonometric functions aren't as useful to our problems.

Neurons are where all of the work is done. They are a weighted sum of previous inputs put through an activation function that bounds it to either 0 to 1 or –1 to 1. In the case of a neuron with two inputs before it, the function for the neuron would be y = φ(w₁x₁ + w₂x₂), where φ is an activation function like sigmoid, and wᵢ are weights determined by a training algorithm.
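A single two-input neuron of this form can be sketched in a few lines; the weights below are made up for illustration rather than produced by a training algorithm:

```python
import math

def sigmoid(s):
    # the standard sigmoid from Table 8-2, bounding output to (0, 1)
    return 1.0 / (1 + math.exp(-2 * s))

# Hypothetical weights (a trained network would learn these)
w1, w2 = 0.8, -0.4

def neuron(x1, x2):
    # y = phi(w1 * x1 + w2 * x2): a weighted sum passed through the activation
    return sigmoid(w1 * x1 + w2 * x2)

print(neuron(1, 0))  # pushed toward 1 by the positive weight
print(neuron(0, 1))  # pushed toward 0 by the negative weight
```

Stacking layers of these units, with each layer's outputs feeding the next layer's weighted sums, is all a feed-forward pass is.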
Output Layer

The output layer is similar to the input layer except that it has neurons in it. This is where the data comes out of the model. Just as with the input layer, this data will either be symmetric or standard. Output layers decide how many neurons are output, which is a function of what we're modeling (see Figure 8-12). In the case of a function that outputs whether a stop light is red, green, or yellow, we'd have three outputs (one for each color). Each of those outputs would contain an approximation for what we want.
Figure 8-12. The output layer of the network
Training Algorithms

As mentioned, the weights for each neuron come from a training algorithm. There are many such algorithms, but the most common are:

• Back propagation
• QuickProp
• RProp

All of these algorithms find optimal weights for each neuron. They do so through iterations, also known as epochs. For each epoch, a training algorithm goes through the entire neural network and compares it against what is expected. At this point, it learns from past miscalculations.

These algorithms have one thing in common: they are trying to find the optimal solution in a convex error surface. You can think of a convex error surface as a bowl with a minimum value in it. Imagine that you are at the top of a hill and want to make it to a valley, but the valley is full of trees. You can't see much in front of you, but you know that you want to get to the valley. You would do so by proceeding based on local inputs and guessing where you want to go next. This is known as the gradient descent
algorithm (i.e., determining minimum error by walking down a valley), and it is illustrated in Figure 8-13. The training algorithms do the same thing; they are looking to minimize error by using local information.
Figure 8-13. Gradient descent down a valley
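The walk down the valley can be illustrated in a few lines. This is a minimal sketch of gradient descent on a one-dimensional convex error surface, error(w) = (w − 3)², whose minimum sits at w = 3; the surface and learning rate are our own example, not from the text:

```python
# The slope of error(w) = (w - 3)**2; the minimum is where this hits zero.
def gradient(w):
    return 2 * (w - 3)

w = 0.0          # start at the top of the hill
alpha = 0.1      # learning rate: how big a step to take downhill
for epoch in range(100):
    # step in the direction that reduces error, using only the local slope
    w -= alpha * gradient(w)

print(round(w, 4))  # converges toward 3
```

The training algorithms below do the same thing over many weights at once: each epoch nudges every weight a little further down the error surface.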
The Delta Rule

While we could solve a massive set of equations, it would be faster to iterate. Instead of trying to calculate the derivative of the error function with respect to the weight directly, we calculate a change in weight for each neuron's weights. This is known as the delta rule, and it is as follows:

Equation 8-1. Delta rule

    Δw_ji = α (t_j − φ(h_j)) φ′(h_j) x_i

This states that the change in weight number i for neuron j is:

    alpha * (expected - calculated) * derivative_of_calculated * input_at_i

α is the learning rate and is a small constant. This initial idea is what powers the back propagation algorithm, which is the general case of the delta rule.
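One delta-rule update for a single sigmoid neuron can be sketched directly from Equation 8-1; the inputs, weights, and learning rate here are hypothetical values chosen for illustration:

```python
import math

def sigmoid(h):
    return 1.0 / (1 + math.exp(-h))

def sigmoid_prime(h):
    # the derivative of the sigmoid, expressed through its own output
    phi = sigmoid(h)
    return phi * (1 - phi)

alpha = 0.5            # learning rate
x = [1.0, 0.5]         # inputs to the neuron
w = [0.2, -0.3]        # current weights
expected = 1.0         # target output t

h = sum(wi * xi for wi, xi in zip(w, x))
before = sigmoid(h)

# alpha * (expected - calculated) * derivative_of_calculated * input_at_i
w = [wi + alpha * (expected - before) * sigmoid_prime(h) * xi
     for wi, xi in zip(w, x)]

after = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
print(before, after)  # the prediction moves toward the expected output
```

A single step only nudges the weights; repeating the update over many epochs is what walks the error down to its minimum.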
Back Propagation

Back propagation is the simplest of the three algorithms that determine the weight of a neuron. You define error as (expected − actual)², where expected is the expected output and actual is the calculated number from the neurons. We want to find where the derivative of that error equals 0, which is the minimum.

Equation 8-2. Back propagation

    Δw(t) = −α (t − y) φ′ x_i + ε Δw(t − 1)

ε is the momentum factor and propels previous weight changes into our current weight change, whereas α is the learning rate. Back propagation has the disadvantage of taking many epochs to calculate. Up until 1988, researchers were struggling to train simple neural networks. Their research on how to improve this led to a whole new algorithm called QuickProp.
QuickProp

Scott Fahlman introduced the QuickProp algorithm after he studied how to improve back propagation. He asserted that back propagation took too long to converge to a solution. He proposed that we instead take the biggest steps possible without overstepping the solution.

Fahlman determined that there are two ways to improve back propagation: making the momentum and learning rate dynamic, and making use of a second derivative of the error with respect to each weight. In the first case, you could better optimize for each weight, and in the second case, you could utilize Newton's method of approximating functions.

With QuickProp, the main difference from back propagation is that you keep a copy of the error derivative computed during the previous epoch, along with the difference between the current and previous values of this weight. To calculate a weight change at time t, use the following function:

    Δw(t) = S(t) / (S(t − 1) − S(t)) · Δw(t − 1)
This carries the risk of changing the weights too much, so there is a new parameter for maximum growth. No weight is allowed to be greater in magnitude than the maximum growth rate multiplied by the previous step for that weight.
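A single QuickProp step with the maximum-growth cap can be sketched as follows; the variable names and sample values are illustrative, not taken from any particular library:

```python
max_growth = 1.75  # cap: a step may not exceed this multiple of the last step

def quickprop_step(s_curr, s_prev, dw_prev):
    """One QuickProp weight change at time t.

    s_curr, s_prev: error slope for this weight at the current and
    previous epoch; dw_prev: the previous weight change.
    """
    # Fahlman's quadratic-approximation step: S(t) / (S(t-1) - S(t)) * dw(t-1)
    step = s_curr / (s_prev - s_curr) * dw_prev
    # never grow more than max_growth times the previous step
    limit = abs(max_growth * dw_prev)
    return max(-limit, min(limit, step))

print(quickprop_step(0.5, 1.0, -0.2))   # slope halved: repeat the last step
print(quickprop_step(0.9, 1.0, -0.2))   # slope barely moved: step is clipped
```

When the slope has barely changed between epochs, the raw formula would take a huge jump toward the estimated minimum; the cap is what keeps that jump sane.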
RProp

RProp is a good algorithm because it converges more quickly. It was introduced by Martin Riedmiller in the 1990s and has been improved since then. It converges on a solution quickly due to its insight that the algorithm can update the weights many times
through an epoch. Instead of calculating weight changes based on a formula, it uses only the sign of the change, along with an increase factor and a decrease factor.

To see what this algorithm looks like in code, we need to define a few constants (or defaults). These are a way to make sure the algorithm doesn't run forever or become volatile. These defaults were taken from the FANN library. The basic algorithm is easier to explain in Python than by writing out the partial derivatives. For ease of reading, note that I am not calculating the error gradients (i.e., the change in error with respect to each specific weight term). This code gives you an idea of how the RProp algorithm works using just pure Python code:

import numpy as np
import random

neurons = 3
inputs = 4
delta_zero = 0.1
increase_factor = 1.2
decrease_factor = 0.5
delta_max = 50.0
delta_min = 0
max_epoch = 100

deltas = np.zeros((inputs, neurons))
last_gradient = np.zeros((inputs, neurons))
def sign(x):
    if x > 0:
        return 1
    elif x < 0:
        return -1
    else:
        return 0

weights = [random.uniform(-1, 1) for _ in range(inputs)]

for j in range(max_epoch):
    for i, weight in enumerate(weights):
        # Current gradient is derived from the change of each value at each layer.
        # Do note that we haven't derived current_gradient, since that would
        # detract from the example.
        gradient_momentum = last_gradient[i][j] * current_gradient[i][j]

        if gradient_momentum > 0:
            deltas[i][j] = min(deltas[i][j] * increase_factor, delta_max)
            change_weight = -sign(current_gradient[i][j]) * deltas[i][j]
            last_gradient[i][j] = current_gradient[i][j]
        elif gradient_momentum < 0:
            deltas[i][j] = max(deltas[i][j] * decrease_factor, delta_min)
            last_gradient[i][j] = 0
        else:
            change_weight = -sign(current_gradient[i][j]) * deltas[i][j]
            weights[i] = weights[i] + change_weight
            last_gradient[i][j] = current_gradient[i][j]
These are the fundamentals you need to understand to be able to build a neural network. Next, we'll talk about how to do so, and what decisions we must make to build an effective one.
Building Neural Networks

Before you begin building a neural network, you must answer the following questions:

• How many hidden layers should you use?
• How many neurons per layer?
• What is your tolerance for error?
How Many Hidden Layers?

As noted earlier in this chapter, what makes neural networks unique is their usage of hidden layers. If you took out hidden layers, you'd have a linear combination problem. You aren't bound to use any particular number of hidden layers, but there are three heuristics that help:

• Do not use more than two hidden layers; otherwise, you might overfit the data. With too many layers, the network starts to memorize the training data. Instead, we want it to find patterns.
• One hidden layer will do the job of approximating a continuous mapping. This is the common case. Most neural networks have only one hidden layer in them.
• Two hidden layers will be able to push past a continuous mapping. This is an uncommon case, but if you know that you don't have a continuous mapping, you can use two hidden layers.

There is no steadfast rule holding you to these heuristics for picking the number of hidden layers. It comes down to trying to minimize the risk of overfitting or underfitting your data.
How Many Neurons for Each Layer?

Neural networks are excellent aggregators and terrible expanders. Neurons themselves are weighted sums of previous neurons, so they have a tendency to not expand out as well as they combine. If you think about it, a hidden layer of 2 that goes to an output layer of 30 would mean that for each output neuron, there would be two inputs. There just isn't enough entropy or data to make a well-fitted model.

This idea of emphasizing aggregation over expansion leads us to the next set of heuristics:

• The number of hidden neurons should be between the number of inputs and the number of neurons at the output layer.
• The number of hidden neurons should be two-thirds the size of the input layer, plus the size of the output layer.
• The number of hidden neurons should be less than twice the size of the input layer.

This comes down to trial and error, though, as the number of hidden neurons will influence how well the model cross-validates, as well as the convergence on a solution. This is just a starting point.
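The three heuristics above can be captured as a quick sanity check for a candidate layer size; the function name and return format here are our own:

```python
# n_in and n_out are the sizes of the input and output layers.
def hidden_size_candidates(n_in, n_out):
    return {
        # heuristic 1: stay between the input and output layer sizes
        'between_bounds': (min(n_in, n_out), max(n_in, n_out)),
        # heuristic 2: two-thirds of the input layer plus the output layer
        'two_thirds_rule': (2 * n_in) // 3 + n_out,
        # heuristic 3: never more than twice the input layer
        'upper_limit': 2 * n_in,
    }

# For instance, 26 letter-frequency inputs and 6 language outputs:
print(hidden_size_candidates(26, 6))
```

For a 26-input, 6-output network, the two-thirds rule suggests 23 hidden neurons, well under the upper limit of 52; you would then cross-validate around that starting point.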
Tolerance for Error and Max Epochs

The tolerance for error gives us a point at which to stop training. We will never get to a perfect solution, but rather converge on one. If you want an algorithm that performs well, then the error rate might be low, like 0.01%. But in most cases, that will take a long time to train due to its intolerance for error. Many start with an error tolerance of 1%. Through cross-validation, this might need to be tightened even more. In neural network parlance, the tolerance is internal, is measured as a mean squared error, and defines a stopping place for the network.

Neural networks are trained over epochs, and this is set before the training algorithm even starts. If an algorithm is taking 10,000 iterations to get to a solution, then there might be a high risk of overtraining and creating a sensitive network. A starting point for training is 1,000 epochs or iterations to train over. This way, you can model some complexity without getting too carried away.

Both max epochs and maximum error define our converging points. They serve as a way to signal when the training algorithm can stop and yield the neural network. At this point, we've learned enough to get our hands dirty and try a real-world example.
Using a Neural Network to Classify a Language

Characters used in a language have a direct correlation with the language itself. Mandarin is recognizable due to its characters, because each character means a specific word. This correlation is true of many Latin-based languages, but in regards to letter frequency.

If we look at the difference of "The quick brown fox jumped over the lazy dog" in English and its German equivalent, "Der schnelle braune Fuchs sprang über den faulen Hund," we'd get the frequency chart shown in Table 8-3.

Table 8-3. Difference in letter frequency between English and German sentence

            a b c d e f g h i j k l m n o p q r s t u v w x y z ü
English     1 1 1 2 4 1 1 2 1 1 1 1 1 1 4 1 1 2 0 2 2 1 1 1 1 1 0
German      3 2 2 3 7 2 1 3 0 0 0 3 0 6 0 1 0 4 2 0 4 0 0 0 0 1 1
Difference  2 1 1 1 3 1 0 1 1 1 1 2 1 5 4 0 1 2 2 2 2 1 1 1 1 0 1
There is a subtle difference between German and English. German uses quite a few more Ns, whereas English uses a lot of Os. If we wanted to expand this to a few more European languages, how would we do that? More specifically, how can we build a model to classify sentences written in English, Polish, German, Finnish, Swedish, or Norwegian?

In this case, we'll build a simple model to predict a language based on the frequency of the characters in the sentences. But before we start, we need to have some data. For that, we'll use the most translated book in the world: the Bible. Let's extract all the chapters out of Matthew and Acts.

The approach we will take is to extract all the sentences out of these text files and create vectors of frequency normalized between 0 and 1. From that, we will train a network that will take those inputs and then match them to a vector of 6. The vector of 6 is defined as the index of the language equaling 1. If the language we are using to train is index 3, the vector would look like [0,0,0,1,0,0] (zero-based indexing).
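That encoding can be sketched for a single sentence: count the letters, normalize the counts to sum to 1, and build the one-hot target vector (assuming, for illustration, that the sentence's language sits at index 3):

```python
from collections import Counter
import string

sentence = "The quick brown fox jumped over the lazy dog"

# Count only alphabetic characters, lowercased
counts = Counter(c for c in sentence.lower() if c in string.ascii_lowercase)

# Normalize so the frequencies sum to 1 -- this is one input vector
total = sum(counts.values())
vector = {char: count / float(total) for char, count in counts.items()}

# Target: a vector of 6 with a 1 at the language's index (here, index 3)
target = [0] * 6
target[3] = 1
```

Every sentence in the corpus becomes one such (vector, target) pair, which is exactly what the network classes below will feed into the training algorithm.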
Setup Notes

All of the code we're using for this example can be found on GitHub. Python is constantly changing, so the README file is the best place to get up to speed on running the examples.
Coding and Testing Design

The overall approach we will be taking is to write two classes to parse the Bible verses and train a neural network:
|
147
Language
    This class will parse the Bible verses and calculate a frequency of letter occurrences.
Network
    This will take language training data and build a network that calculates the most likely language attached to new incoming text (see Figure 8-14).
Figure 8-14. Class diagram for classifying text to languages

The testing for our neural network will focus primarily on testing the data transmission into theanets, which we will exercise with a cross-validation test. We will set a threshold for acceptance and use it as a unit test.
The Data

The data was grabbed from biblegateway.com. To have a good list of data, I've downloaded passages in English, Polish, German, Finnish, Swedish, and Norwegian. They are from the books Acts and Matthew. The script I used to download the data was written in Ruby, and I felt that it wouldn't be helpful to include a translated version here. Feel free to check out the script if you'd like.
Writing the Seam Test for Language

To take our training data, we need to build a class to parse it and interface with our neural network. For that, we will use the class name Language. Its purpose is to take a file of text in a given language and load it into a distribution of character frequencies. When needed, Language will output a vector of these characters, all summing up to 1. All of our inputs will be between 0 and 1. Our parameters are:

• We want to make sure that our data is correct and sums to 1.
• We don't want characters like UTF-8 spaces or punctuation entering our data.
• We want to downcase all characters. A should be translated as a. Ü should also be ü.

This helps us make sure that our Language class, which takes a text file and outputs a list of character-frequency dictionaries, is correct:

# coding=utf-8
from StringIO import StringIO
import string
import unittest

from language import Language
    class TestLanguage(unittest.TestCase):
        def setUp(self):
            self.language_data = u'''
            abcdefghijklmnopqrstuvwxyz.
            ABCDEFGHIJKLMNOPQRSTUVWXYZ.
            \u00A0.
            !@#$%^&*()_+'?[]“”‘’—»«›‹–„/.
            ïëéüòèöÄÖßÜøæåÅØóąłżŻśęńŚćźŁ.
            '''
            self.special_characters = self.language_data.split("\n")[-1].strip()
            self.language_io = StringIO(self.language_data)
            self.language = Language(self.language_io, 'English')

        def test_keys(self):
            """has the proper keys for each vector"""
            self.assertListEqual(list(string.ascii_lowercase),
                                 sorted(self.language.vectors[0].keys()))
            self.assertListEqual(list(string.ascii_lowercase),
                                 sorted(self.language.vectors[1].keys()))
            special_chars = sorted(set(u'ïëéüòèöäößüøæååóąłżżśęńśćź'))
            self.assertListEqual(special_chars,
                                 sorted(self.language.vectors[-1].keys()))

        def test_values(self):
            """sums to 1 for all vectors"""
            for vector in self.language.vectors:
                self.assertEqual(1, sum(vector.values()))

        def test_character_set(self):
            """returns a unique set of the characters used"""
            chars = list(string.ascii_lowercase)
            chars += list(set(u'ïëéüòèöäößüøæååóąłżżśęńśćź'))
            self.assertListEqual(sorted(chars), sorted(self.language.characters))
Using a Neural Network to Classify a Language
|
149
At this point, we have not written Language, and all of our tests fail. For the first goal, let's get something that counts all the alpha characters and stops on a sentence. That would look like this:

    # coding=UTF-8
    from tokenizer import Tokenizer


    class Language:
        def __init__(self, language_io, name):
            self.name = name
            self.vectors, self.characters = Tokenizer.tokenize(language_io)

    # coding=utf-8
    import collections
    from fractions import Fraction


    class Tokenizer(object):
        punctuation = list(u'!@#$%^&*()_+\'[]“”‘’—»«›‹–„/')
        spaces = list(u' \u00A0\n')
        stop_characters = list('.?!')

        @classmethod
        def tokenize(cls, io):
            vectors = []
            dist = collections.defaultdict(int)
            characters = set()
            for char in io.read():
                if char in cls.stop_characters:
                    if len(dist) > 0:
                        vectors.append(cls.normalize(dist))
                    dist = collections.defaultdict(int)
                elif char not in cls.spaces and char not in cls.punctuation:
                    character = char.lower()
                    characters.add(character)
                    dist[character] += 1
            if len(dist) > 0:
                vectors.append(cls.normalize(dist))
            return vectors, characters
Now we have something that should work. Do note that there is the Unicode space here, which is denoted as \u00a0. But we have a new problem: the data does not add up to 1. We will introduce a new function, normalize, which takes a dictionary of values and applies the function x/sum(x) to all of them. Note that I used Fraction, which increases the reliability of calculations and doesn't do floating-point arithmetic until needed:
150
|
Chapter 8: Neural Networks
    class Tokenizer:
        # tokenize

        @classmethod
        def normalize(cls, dist):
            sum_values = sum(dist.values())
            return {k: Fraction(v, sum_values) for k, v in dist.iteritems()}
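To see why Fraction matters here, consider a minimal, self-contained sketch of the same normalization, written for Python 3 (unlike the book's Python 2 listings) and using made-up character counts: with Fraction the normalized values sum to exactly 1, with no floating-point rounding.

```python
from fractions import Fraction

def normalize(dist):
    # Divide each count by the total, keeping exact rational arithmetic
    total = sum(dist.values())
    return {k: Fraction(v, total) for k, v in dist.items()}

counts = {'a': 3, 'b': 2, 'c': 1}  # hypothetical character counts
freqs = normalize(counts)
assert sum(freqs.values()) == 1    # exactly 1, not 0.999...
assert freqs['a'] == Fraction(1, 2)
```

With floats, six-way splits like 1/6 would not sum back to exactly 1, and the `test_values` assertion above would become flaky.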
Everything is green and things look great for Language. We have full test coverage on a class that will be used to interface with our neural network. Now we can move on to building a Network class.
Cross-Validating Our Way to a Network Class

I used the Bible to find training data for our language classification because it is the most translated book in history. For the data, I decided to download Matthew and Acts in English, Finnish, German, Norwegian, Polish, and Swedish. With this natural divide between Acts and Matthew, we can define 12 tests: train a model on Acts and see how it does against Matthew's data, and vice versa. The code looks like:

    # coding=utf-8
    from StringIO import StringIO
    import codecs
    from glob import glob
    import os
    import re
    import unittest

    from nose_parameterized import parameterized

    from language import Language
    from network import Network
    def language_name(file_name):
        basename, ext = os.path.splitext(os.path.basename(file_name))
        return basename.split('_')[0]


    def load_glob(pattern):
        result = []
        for file_name in glob(pattern):
            result.append(Language(codecs.open(file_name, encoding='utf-8'),
                                   language_name(file_name)))
        return result
    class TestNetwork(unittest.TestCase):
        matthew_languages = load_glob('data/*_0.txt')
        acts_languages = load_glob('data/*_1.txt')
        matthew_verses = Network(matthew_languages)
        matthew_verses.train()
        acts_verses = Network(acts_languages)
        acts_verses.train()
        languages = 'English Finnish German Norwegian Polish Swedish'.split()

        @parameterized.expand(languages)
        def test_accuracy(self, lang):
            """Trains and cross-validates with an error of 5%"""
            print 'Test for %s' % lang
            self.compare(self.matthew_verses, './data/%s_1.txt' % lang)
            self.compare(self.acts_verses, './data/%s_0.txt' % lang)

        def compare(self, network, file_name):
            misses = 0.0
            hits = 0.0
            with codecs.open(file_name, encoding='utf-8') as f:
                text = f.read()
            for sentence in re.split(r'[\.!\?]', text):
                language = network.predict(StringIO(sentence))
                if language is None:
                    continue
                if language.name == language_name(file_name):
                    hits += 1
                else:
                    misses += 1
            total = misses + hits
            self.assertLess(misses, 0.05 * total,
                            msg='%s has failed with a miss rate of %f'
                                % (file_name, misses / total))
There is a folder called data in the root of the project that contains files in the form Language_0.txt and Language_1.txt, where Language is the language name, _0 is the index mapping to Matthew, and _1 is the index mapping to Acts. It can take a while to train a neural network, which is why we are only cross-validating with two folds. At this point, we have 12 tests defined. Of course, nothing will work now because we haven't written the Network class. To start out the Network class, we define an initial class that takes a list of Language instances. Secondly, because we don't want to write the whole neural network by hand, we will lean on a library. But now we have an important decision to make: which neural net library should we use? There are many out there, and of course we could build our own. Probably the best one out there right now is theanets, which integrates nicely with NumPy and can actually be utilized for deep learning, autoencoding, and much more than just
straight feed-forward networks. We will use theanets here, although you could use other libraries like PyBrain.

    import numpy as np
    import theanets

    from tokenizer import Tokenizer


    class Network(object):
        def __init__(self, languages, error=0.005):
            self._trainer = None
            self._net = None
            self.languages = languages
            self.error = error
            self.inputs = set()
            for language in languages:
                self.inputs.update(language.characters)
            self.inputs = sorted(self.inputs)

        def _build_trainer(self):
            inputs = []
            desired_outputs = []
            for language_index, language in enumerate(self.languages):
                for vector in language.vectors:
                    inputs.append(self._code(vector))
                    desired_outputs.append(language_index)
            inputs = np.array(inputs, dtype=np.float32)
            desired_outputs = np.array(desired_outputs, dtype=np.int32)
            self._trainer = (inputs, desired_outputs)

        def _code(self, vector):
            result = np.zeros(len(self.inputs))
            for char, freq in vector.iteritems():
                if char in self.inputs:
                    result[self.inputs.index(char)] = float(freq)
            return result

        def _build_ann(self):
            hidden_neurons = 2 * (len(self.inputs) + len(self.languages)) / 3
            self._net = theanets.Classifier([len(self.inputs),
                                             {'size': hidden_neurons,
                                              'activation': 'tanh'},
                                             len(self.languages)])
Now that we have the proper inputs and the proper outputs, the model is set up and we should be able to run the whole cross-validation. But, of course, there is an error because we cannot yet run new data against the network. To address this, we need to build a predict method. At this point, we have something that works and looks like this:

    class Network:
        def train(self):
            self._build_trainer()
            self._build_ann()
            self._net.train(self._trainer, learning_rate=0.01)

        def predict(self, sentence):
            if self._net is None or self._trainer is None:
                raise Exception('Must train first')
            vectors, characters = Tokenizer.tokenize(sentence)
            if len(vectors) == 0:
                return None
            input = np.array(self._code(vectors[0]), ndmin=2, dtype=np.float32)
            result = self._net.predict(input)
            return self.languages[result[0]]
Tuning the Neural Network

At this point there are quite a few optimizations we could make. You could play around with different hidden-layer activation functions like tanh, softmax, or various others, as well as internal rates of decay or error thresholds. I'll leave further tuning to you as an exercise in discovering what works and what does not. The takeaway here is that with an initial test to measure accuracy against, you can try many different avenues.
Precision and Recall for Neural Networks

Going a step further, when we deploy this neural network code to a production environment, we need to close the information loop by introducing precision and recall metrics to track over time. These metrics will be calculated from user input. We can measure precision and recall by asking in our user interface whether our prediction was correct. From that feedback, we can capture the blurb and the correct classification, and feed them back into our model the next time we train. What we need to monitor the performance of this neural network in production is a metric for how many times a classification was run, as well as how many times it was wrong. To learn more about monitoring precision and recall, see Chapter 9.
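As a minimal sketch of the bookkeeping involved (the function and the feedback records below are my own invention, not from the book's codebase): treating a user's confirmation as ground truth, precision is the fraction of predictions of a language that were confirmed correct, and recall is the fraction of sentences actually in that language that we caught.

```python
def precision_recall(records, label):
    """records: (predicted, actual) pairs gathered from user feedback."""
    tp = sum(1 for p, a in records if p == label and a == label)
    fp = sum(1 for p, a in records if p == label and a != label)
    fn = sum(1 for p, a in records if p != label and a == label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical feedback: (what we predicted, what the user said it was)
feedback = [('Polish', 'Polish'), ('Polish', 'German'),
            ('German', 'German'), ('German', 'Polish')]
p, r = precision_recall(feedback, 'Polish')
assert p == 0.5 and r == 0.5
```

Logged over time, a drop in either number is the signal to retrain with the accumulated feedback.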
Wrap-Up of Example

The neural networks algorithm is a fun way of mapping information and learning through iterations, and it works well for our case of mapping sentences to languages. Loading this code in an IPython session, I had fun trying out phrases like "meep moop," which is classified as Norwegian! Feel free to play with the code.
Conclusion

The neural networks algorithm is a powerful tool in a machine learning toolkit. Neural networks serve as a way of mapping previous observations through a functional model. While they are touted as black box models, they can be understood with a little bit of mathematics and illustration. You can use neural networks for many things, like mapping letter frequencies to languages or handwriting detection. There are many problems being worked on right now with regard to this algorithm, and several in-depth books have been written on the topic as well. Anything written by Geoffrey Hinton is worth a read, namely Unsupervised Learning: Foundations of Neural Computation.

This chapter introduced neural networks as an artificial version of our brain and explained how they work by summing up inputs using a weighted function. These weighted functions were then normalized within a certain range. Many algorithms exist to train these weight values, but the most prevalent is the RProp algorithm. Last, we summed it all up with a practical example of mapping sentences to languages.
CHAPTER 9
Clustering
Up until this point we have been solving problems of fitting a function to a set of data. For instance, given previously observed mushroom attributes and edibleness, how would we classify new mushrooms? Or, given a neighborhood, what should the house value be? This chapter talks about a completely different learning problem: clustering. This is a subset of unsupervised learning methods and is useful in practice for understanding data at its core. If you've been to a library you've seen clustering at work. The Dewey Decimal system is a form of clustering: Dewey came up with a system that attached numbers of increasing granularity to categories of books, and it revolutionized libraries.

We will talk about what it means to be unsupervised and what power exists in that, as well as two clustering algorithms: K-Means and expectation maximization (EM) clustering. We will also address two other issues associated with clustering and unsupervised learning:

• How do you test a clustering algorithm?
• The impossibility theorem.
Studying Data Without Any Bias

If I were to give you an Excel spreadsheet full of data, and instead of giving you any idea as to what I'm looking for, just asked you to tell me something, what could you tell me? That is what unsupervised learning aims to do: study what the data is about.
A more formal definition would be to think of unsupervised learning as finding the best function f such that f(x) = x. At first glance that looks trivial, since you can always map data onto itself, but what unsupervised learning does is define a function that describes the data. What does that mean? Unsupervised learning is trying to find a function that generalizes the data to some degree. So instead of trying to fit it to some classification or number, we fit the function to describe the data. This is essential to understand, since it gives us a glimpse as to how to test such a model. Let's dive into an example.
User Cohorts

Grouping people into cohorts makes a lot of business and marketing sense. For instance, your first customer is different from your ten thousandth customer or your millionth customer. This problem of defining users into cohorts is a common one. If we were able to effectively split our customers into different buckets based on behavior and time of signup, then we could better serve them by diversifying our marketing strategy.

The problem is that we don't have a predefined label for customer cohorts. To get over this problem you could look at what month and year they became a customer. But that is making a big assumption about that being the defining factor that splits customers into groups. What if time of first purchase had nothing to do with whether they were in one cohort or the other? For example, they could only have made one purchase or many.

Instead, what can we learn from the data? Take a look at Table 9-1. Let's say we know when they signed up, how much they've spent, and what their favorite color is. Assume also that over the last two years we've only had 10 users sign up (well, I hope you have more than that over two years, but let's keep this simple).

Table 9-1. Data collected over 2 years

User ID  Signup date  Money spent  Favorite color
1        Jan 14       40           N/A
2        Nov 3        50           Orange
3        Jan 30       53           Green
4        Oct 3        100          Magenta
5        Feb 1        0            Cyan
6        Dec 31       0            Purple
7        Sep 3        0            Mauve
8        Dec 31       0            Yellow
9        Jan 13       14           Blue
10       Jan 1        50           Beige
Given these data, we want to learn a function that describes what we have. Looking at these rows, you notice that the favorite colors are irrelevant data; they carry no information as to which cohort a user should be in. That leaves us with Money spent and Signup date. There seems to be a group of users who spend money, and one of those who don't. That is useful information. In the Signup date column you'll notice that a lot of users sign up around the beginning of the year and the end of the previous one, as well as around September, October, or November.

Now we have a choice: whether we want to find the gist of this data in something compact or find a new mapping of this data onto a transformation. Remember the discussion of kernel tricks in Chapter 7? This is all we're doing: mapping this data onto a new dimension. For the purposes of this chapter we will delve into a new mapping technique; in Chapter 10, on data extraction and improvement, we'll delve into compaction of data.

Let's imagine that we have 10 users in our database and have information on when they signed up, and how much money they spent. Our marketing team has assigned them manually to cohorts (Table 9-2).

Table 9-2. Manual cohort assignment to the original data set

User ID  Signup date (days to Jan 1)  Money spent  Cohort
1        Jan 14 (13)                  40           1
2        Nov 3 (59)                   50           2
3        Jan 30 (29)                  53           1
4        Oct 3 (90)                   100          2
5        Feb 1 (31)                   0            1
6        Dec 31 (1)                   0            1
7        Sep 3 (120)                  0            2
8        Dec 31 (1)                   0            1
9        Jan 13 (12)                  14           1
10       Jan 1 (0)                    50           1
We have divided the users into two groups: seven users are in group 1, which we could call the beginning-of-the-year signups, and the end-of-the-year signups are in group 2.
But there’s something here that doesn’t sit well. We assigned the users to different clusters, but didn’t really test anything—what to do?
Testing Cluster Mappings

Testing unsupervised methods doesn't have tools as good as cross-validation, confusion matrices, ROC curves, or sensitivity analysis, but they can still be tested, using one of these two methods:

• Determining some a priori fitness of the unsupervised learning method
• Comparing results to some sort of ground truth
Fitness of a Cluster

Domain knowledge can become very useful in determining the fitness of an unsupervised model. For instance, if we want to find things that are similar, we might use some sort of distance-based metric. If instead we wish to determine independent factors of the data, we might calculate fitness based on correlation or covariance.

Possible fitness functions include:

• Mean distance from centroid
• Mean distance from all points in a cluster
• Silhouette coefficient

Mean distances from the centroid, or from all points in a cluster, are almost baked into algorithms that we will be talking about such as K-Means or EM clustering, but the silhouette coefficient is an interesting take on fitness of cluster mappings.
Silhouette Coefficient

The silhouette coefficient evaluates cluster performance without ground truth (i.e., data that has been provided through direct observation versus inferred observations) by looking at the relation of the average distance inside of a cluster versus the average distance to the nearest cluster (Figure 9-1).
Figure 9-1. Silhouette coefficient visually
Mathematically the metric is:

    s = (b − a) / max(a, b)
where a is the average distance between a sample and all other points in that cluster and b is the same sample’s mean distance to the next nearest cluster points. This coefficient ends up becoming useful because it will show fitness on a scale of –1 to 1 while not requiring ground truth.
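A tiny sketch of that formula in pure Python, with a made-up two-cluster data set of one-dimensional points:

```python
def mean_dist(point, others):
    # Average absolute distance from one point to a list of points
    return sum(abs(point - o) for o in others) / len(others)

def silhouette(point, own_cluster, nearest_cluster):
    # a: mean distance to the other points in the same cluster
    a = mean_dist(point, [p for p in own_cluster if p != point])
    # b: mean distance to the points of the nearest other cluster
    b = mean_dist(point, nearest_cluster)
    return (b - a) / max(a, b)

cluster_one = [1.0, 1.2, 0.8]
cluster_two = [9.0, 9.5]
s = silhouette(1.0, cluster_one, cluster_two)
assert 0.9 < s <= 1.0  # tightly packed and far from the other cluster
```

A point deep inside a tight, well-separated cluster scores near 1; a point sitting between clusters scores near 0, and a likely misassignment goes negative.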
Comparing Results to Ground Truth

In practice machine learning often involves utilizing ground truth, which is something that we can usually find through trained data sets, humans, or other means like test equipment. This data is valuable in testing our intuition and determining how fitting our model is. Clustering can be tested against ground truth using the following means:

• Rand index
• Mutual information
• Homogeneity
• Completeness
• V-measure
• Fowlkes-Mallows score

All of these methods can be extremely useful in determining how fit a model is. scikit-learn implements all of these and can easily be used to determine a score.
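As an illustration of the first metric on that list, here is a sketch of the plain (unadjusted) Rand index in pure Python; scikit-learn's adjusted_rand_score is the chance-corrected version you would normally reach for, and the label vectors below are invented for the example:

```python
from itertools import combinations

def rand_index(truth, predicted):
    # Count pairs of points on which the two labelings agree:
    # either both place the pair in the same cluster, or both split it.
    agreements = 0
    pairs = list(combinations(range(len(truth)), 2))
    for i, j in pairs:
        same_truth = truth[i] == truth[j]
        same_pred = predicted[i] == predicted[j]
        if same_truth == same_pred:
            agreements += 1
    return agreements / len(pairs)

truth = [0, 0, 1, 1]
predicted = [1, 1, 0, 0]  # the same partition, just relabeled
assert rand_index(truth, predicted) == 1.0
assert rand_index(truth, [0, 1, 0, 1]) < 1.0
```

Because it only compares pairs, the score is insensitive to how the cluster labels are numbered, which is exactly what we want when comparing a clustering to ground truth.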
K-Means Clustering

There are a lot of clustering algorithms, like linkage clustering or Diana, but one of the most common is K-Means clustering. Using a predefined K, which is the number of clusters that one wants to split the data into, K-Means will find the most optimal centroids of clusters. One nice property of K-Means clustering is that the clusters will be strict, spherical in nature, and converge to a solution. In this section we will briefly talk about how K-Means clustering works.
The K-Means Algorithm

The K-Means algorithm starts with a base case. Pick K random points in the data set and define them as centroids. Next, assign each point to the cluster of the centroid it is closest to. Now we have a clustering based on the original randomized centroids. This is not exactly what we want to end with, so we update where the centroids are using a mean of the data. Then we repeat until the centroids no longer move (Figure 9-2).
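Those steps translate into a short sketch in pure Python (one-dimensional points for brevity; random restarts and convergence checks are omitted, and the data and K are made up for the example):

```python
import random

def kmeans(points, k, iterations=20):
    random.seed(42)                       # deterministic for the example
    centroids = random.sample(points, k)  # step 1: pick K random points
    for _ in range(iterations):
        # step 2: assign each point to its closest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            index = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[index].append(p)
        # step 3: move each centroid to the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

points = [1.0, 1.1, 0.9, 8.0, 8.2, 7.8]
centroids, clusters = kmeans(points, 2)
assert sorted(round(c, 1) for c in centroids) == [1.0, 8.0]
```

With well-separated data like this, the centroids settle on the two group means after a few iterations, which is the fixed point the algorithm description above is driving at.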
Figure 9-2. In a lot of ways, K-Means resembles a pizza

To construct K-Means clusters we need to have some sort of measurement for distance from the center. Back in Chapter 3 we introduced a few distance metrics, such as:

Manhattan distance
    d(x, y) = Σ_{i=1}^{n} |x_i − y_i|

Euclidean distance
    d(x, y) = sqrt(Σ_{i=1}^{n} (x_i − y_i)²)

Minkowski distance
    d(x, y) = (Σ_{i=1}^{n} |x_i − y_i|^p)^(1/p)

Mahalanobis distance
    d(x, y) = sqrt(Σ_{i=1}^{n} (x_i − y_i)² / s_i²)
For a refresher on the metrics discussed here, review K-Nearest Neighbors in Chapter 3.
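These four metrics can be written compactly; a sketch in pure Python, with Mahalanobis in the simplified diagonal-variance form given above:

```python
import math

def manhattan(x, y):
    return sum(abs(a - b) for a, b in zip(x, y))

def euclidean(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def minkowski(x, y, p):
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

def mahalanobis(x, y, variances):
    # Simplified form assuming a diagonal covariance (per-dimension variances)
    return math.sqrt(sum((a - b) ** 2 / s for a, b, s in zip(x, y, variances)))

assert manhattan([0, 0], [3, 4]) == 7
assert euclidean([0, 0], [3, 4]) == 5.0
```

Note that Minkowski with p = 1 reduces to Manhattan and with p = 2 to Euclidean, and Mahalanobis with unit variances is just Euclidean.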
Downside of K-Means Clustering

One drawback of this procedure is that everything must have a hard boundary. This means that a data point can only be in one cluster and cannot straddle the line between two of them. K-Means also prefers spherical data, since most of the time the Euclidean distance is being used. When looking at a graphic like Figure 9-3, where the data in the middle could go either direction (to a cluster on the left or right), the downsides become obvious.
EM Clustering

Instead of focusing on finding a centroid and then data points that relate to it, EM clustering focuses on solving a different problem. Let's say that you want to split your data points into either cluster 1 or 2. You want a good guess of which cluster each point is in, but don't care if there's some fuzziness. Instead of an exact assignment, we really want a probability that the data point is in each cluster.

Another way to think of clustering is how we interpret things like music. Classically speaking, Bach is Baroque music, Mozart is Classical, and Brahms is Romantic. Using an algorithm like K-Means would probably work well for classical music, but for more modern music things aren't that simple. For instance, jazz is extremely nuanced. Miles Davis, Sun Ra, and others really don't fit into a categorization. They were a mix of a lot of influences.

So instead of classifying music like jazz we could take a more holistic approach through EM clustering. Imagine we had a simple example where we wanted to classify our jazz collection into either fusion or swing. It's a simplistic model, but we could start out with the assumption that a piece could be either swing or fusion with a 50% chance. Notating this using math, we could say that z_k = <0.5, 0.5>. Then if we were to run a special algorithm to determine what "Bitches Brew—Miles Davis" is, we might find z_k = <0.9, 0.1>, or that it's 90% fusion. Similarly, if we were to run this on something like "Oleo—Sonny Rollins" we might find the opposite to be true, with 95% being swing.
The beauty of this kind of thinking is that in practice, data doesn’t always fit into a category. But how would an algorithm like this work if we were to write it?
Figure 9-3. This shows how clusters can actually be much softer
Algorithm

The EM clustering algorithm is an iterative process to converge on a cluster mapping. It completes two steps in each iteration: expect and maximize. But what do those steps actually mean?
Expectation

Expectation is about updating the truth of the model and seeing how well we are mapping. In a lot of ways this is a test-driven approach to building clusters—we're figuring out how well our model is tracking the data. Mathematically speaking, we estimate the probability vector for each row of data given its previous value. On the first iteration we just assume that everything is equal (unless you have some domain knowledge you feed into the model). Given that information, we calculate the log likelihood of theta in the conditional distribution between our model and the true value of the data. Notated it is:

    Q(θ | θ^t) = E_{Z|X,θ^t}[log L(θ; X, Z)]
θ is the probability model we have assigned to rows. Z and X are the distributions for our cluster mappings and the original data points.
Maximization

Just estimating the log likelihood of something doesn't solve our problem of assigning new probabilities to the Z distribution. For that we simply take the argument max of the expectation function. Namely, we are looking for the new θ that will maximize the log likelihood:

    θ^(t+1) = arg max_θ Q(θ | θ^t)
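To make the two steps concrete, here is a compact sketch of EM for a mixture of two one-dimensional Gaussians in pure Python (the toy data and initialization scheme are invented for the example; a real implementation would guard more carefully against degenerate variances):

```python
import math

def gaussian(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em(points, iterations=30):
    # Crude initialization: two means at the extremes, equal weights/variances
    means, variances, weights = [min(points), max(points)], [1.0, 1.0], [0.5, 0.5]
    for _ in range(iterations):
        # E-step: responsibility of each component for each point
        resp = []
        for x in points:
            likes = [w * gaussian(x, m, v)
                     for w, m, v in zip(weights, means, variances)]
            total = sum(likes)
            resp.append([l / total for l in likes])
        # M-step: re-estimate parameters from the responsibilities
        for k in range(2):
            nk = sum(r[k] for r in resp)
            means[k] = sum(r[k] * x for r, x in zip(resp, points)) / nk
            variances[k] = max(1e-3, sum(r[k] * (x - means[k]) ** 2
                                         for r, x in zip(resp, points)) / nk)
            weights[k] = nk / len(points)
    return means, resp

points = [1.0, 1.2, 0.8, 9.0, 9.2, 8.8]
means, resp = em(points)
assert round(means[0], 1) == 1.0 and round(means[1], 1) == 9.0
assert resp[0][0] > 0.99  # the first point almost surely belongs to cluster 0
```

The responsibilities in resp are exactly the soft z vectors discussed above: each point gets a probability per cluster rather than a hard assignment.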
The unfortunate thing about EM clustering is that it is not guaranteed to converge to a good solution and can falter when mapping data with singular covariances. We will delve into more of the issues related to EM clustering in "Example: Categorizing Music" on page 166. First we need to talk about one thing that all clustering algorithms share in common: the impossibility theorem.
The Impossibility Theorem

There is no such thing as a free lunch, and clustering is no exception. The benefit we get out of clustering to magically map data points to particular groupings comes at a cost. This was described by Jon Kleinberg, who touts it as the impossibility theorem, which states that you can never have more than two of the following when clustering:

1. Richness
2. Scale invariance
3. Consistency

Richness is the notion that there exists a distance function that will yield all different types of partitions. What this means intuitively is that a clustering algorithm has the ability to create all types of mappings from data points to cluster assignments.

Scale invariance is simple to understand. Imagine that you were building a rocket ship and started calculating everything in kilometers until your boss said that you need to use miles instead. There wouldn't be a problem switching; you would just divide all your measurements by a constant. Basically, if all the distances are multiplied by 20, the same clustering should result.

Consistency is more subtle. If we shrank the distances between points inside of a cluster and expanded the distances between clusters, the clustering should yield the same result. At this point you probably understand that clustering isn't as good as many originally think. It has a lot of issues, and consistency is definitely one of those that should be called out.

For our purposes, K-Means and EM clustering satisfy richness and scale invariance but not consistency. This fact makes testing clustering just about impossible. The only way we really can test is by anecdote and example, but that is okay for analysis purposes. In the next section we will analyze jazz music using K-Means and EM clustering.
Example: Categorizing Music

Music has a deep history of recordings and composed pieces. It would take an entire degree and extensive study of musicology just to be able to effectively categorize it all. The ways we can sort music into categories are endless. Personally, I sort my own record collection by artist name, but sometimes artists will perform with one another. On top of that, sometimes we can categorize based on genre. Yet what about the fact that genres are broad—such as jazz, for instance? According to the Montreux Jazz Festival, jazz is anything you can improvise over. How can we effectively build a library of music where we can divide our collection into similar pieces of work?

Instead let's approach this by using K-Means and EM clustering. This would give us a soft clustering of music pieces that we could use to build a taxonomy of music. In this section we will first determine where we will get our data from and what sort of attributes we can extract, then determine what we can validate upon. We will also discuss why clustering sounds great in theory but in practice doesn't give us much except for clusters.
Setup Notes

All of the code we're using for this example can be found on GitHub. Python is constantly changing, so the README is the best place to come up to speed on running the examples.
Gathering the Data

There is a massive amount of data on music from the 1100s through today. We have MP3s, CDs, vinyl records, and written music. Without trying to classify the entire world of music, let's determine a small subsection that we can use. Since I don't want to engage in any copyright suits, we will only use public information on albums. This would be Artist, Song Name, Genre (if available), and any other characteristics we can find. To achieve this we have access to a plethora of information contained in Discogs.com. They offer many XML data dumps of records and songs.

Also, since we're not trying to cluster the entire data set of albums in the world, let's just focus on jazz. Most people would agree that jazz is a genre that is hard to really classify into any category. It could be fusion, or it could be steel drums.

To get our data set I downloaded metadata (year, artist, genre, etc.) for the best jazz albums (according to). The data goes back to 1940 and well into the 2000s. In total I was able to download metadata for about 1,200 unique records. All great albums! But that isn't enough information. On top of that, I annotated the information by using the Discogs API to determine the style of jazz music in each.

After annotating the original data set I found that there are 128 unique styles associated with jazz (at least according to Discogs). They range from aboriginal to vocal.
Coding Design

Although this chapter uses two different algorithms (EM clustering and K-Means clustering), the code will focus on EM clustering and will follow the data flow in Figure 9-4.
Figure 9-4. EM clustering class
Analyzing the Data with K-Means

Like we did with KNN, we need to figure out an optimal K. Unfortunately with clustering there really isn't much we can test with, except to just see whether we split into two different clusters. But let's say that we want to fit all of our records on a shelf and have 25 slots—similar to the IKEA bookshelf. We could run a clustering of all of our data using K = 25. Doing that requires little code because we can rely on scikit-learn:

    import csv

    from sklearn.cluster import KMeans

    data = []
    artists = []
    years = []
    # (the loop that loads data, artists, and years from the annotated
    # metadata is elided in the original text)

    clusters = KMeans(n_clusters=25).fit_predict(data)

    with open('data/clustered_kmeans.csv', 'wb') as csvfile:
        fieldnames = ['artist_album', 'year', 'cluster']
        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
        writer.writeheader()
        for i, cluster in enumerate(clusters):
            writer.writerow({'artist_album': artists[i],
                             'year': years[i],
                             'cluster': cluster})
That’s it! Of course clustering without looking at what this actually tells us is useless. This does split the data into 25 different categories, but what does it all mean? Looking at a graphic of year versus assigned cluster number yields interesting results (Figure 9-5).
Figure 9-5. K-Means applied to jazz albums

As you can see, jazz starts out in the Big Band era pretty much in the same cluster, transitions into cool jazz, and then around 1959 it starts to go everywhere, until about 1990 when things cool down a bit. What's fascinating is how well that syncs up with jazz history. What happens when we cluster the data using EM clustering?
EM Clustering Our Data

With EM clustering, remember that we are probabilistically assigning to different clusters—it isn't 100% one or the other. This could be highly useful for our purposes, since jazz has so much crossover.
Let’s go through the process of writing our own code and then use it to map the same data that we have from our jazz data set, then see what happens. Our first step is to be able to initialize the cluster. If you remember we need to have indicator variables zt that follow a uniform distribution. These tell us the probability that each data point is in each cluster.
Named Tuples

Tuples in Python are basically immutable arrays. They are fast, and useful when moving data around. But what is a named tuple? Think of named tuples as lightweight objects that we can use instead of defining a new class. In other languages they might be called structs. For example, imagine you want to look at points on a Cartesian x,y graph. We could say that a point is basically just (x, y), or we could use a named tuple:

    from collections import namedtuple

    point = (1.0, 5.0)

    Point = namedtuple('Point', 'x y')
    named_point = Point(1.0, 5.0)

    point[0] == named_point.x
    point[1] == named_point.y
It’s more or less syntactic sugar but it can make the code much easier to read with lightweight objects to wrap tuple data in.
Stepping back into our example, we first need to write a helper function that returns the density of a multivariate normal distribution. This is based on how R does it:

from collections import namedtuple
import random
import logging
import math

import numpy as np
from numpy.linalg import LinAlgError


def dvmnorm(x, mean, covariance, log=False):
    """density function for the multivariate normal distribution
    based on sources of R library 'mvtnorm'

    :rtype : np.array
    :param x: vector or matrix of quantiles. If x is a matrix,
        each row is taken to be a quantile
    :param mean: mean vector, np.array
    :param covariance: covariance matrix, np.array
    :param log: if True, densities d are given as log(d), default is False
    """
    n = covariance.shape[0]
    try:
        dec = np.linalg.cholesky(covariance)
    except LinAlgError:
        dec = np.linalg.cholesky(covariance +
                                 np.eye(covariance.shape[0]) * 0.0001)
    tmp = np.linalg.solve(dec, np.transpose(x - mean))
    rss = np.sum(tmp * tmp, axis=0)
    logretval = - np.sum(np.log(np.diag(dec))) - \
        0.5 * n * np.log(2 * math.pi) - 0.5 * rss
    if log:
        return logretval
    else:
        return np.exp(logretval)
Using all this setup, we can now build an EMClustering class that will do our EM clustering and log output as needed. This class has the following methods of note:

partitions
    Will return the cluster mappings of the data if they are set.
data
    Will return the data object passed in.
labels
    Will return the membership weights for each cluster.
clusters
    Will return the clusters.
setup
    Does all of the setup for the EM clustering.

class EMClustering(object):
    logger = logging.getLogger(__name__)
    ch = logging.StreamHandler()
    formatstring = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
    formatter = logging.Formatter(formatstring)
    ch.setFormatter(formatter)
    logger.addHandler(ch)
    logger.setLevel(logging.DEBUG)

    cluster = namedtuple('cluster', 'weight, mean, covariance')

    def __init__(self, n_clusters):
        self._data = None
        self._clusters = None
        self._membership_weights = None
        self._partitions = None
        self._n_clusters = n_clusters

    @property
    def partitions(self):
        return self._partitions

    @property
    def data(self):
        return self._data

    @property
    def labels(self):
        return self._membership_weights

    @property
    def clusters(self):
        return self._clusters

    def setup(self, data):
        self._n_samples, self._n_features = data.shape
        self._data = data
        self._membership_weights = np.ones((self._n_samples,
                                            self._n_clusters)) / \
            self._n_clusters
        self._s = 0.2
        indices = list(range(data.shape[0]))
        random.shuffle(indices)
        pick_k_random_indices = random.sample(indices, self._n_clusters)
        self._clusters = []
        for cluster_num in range(self._n_clusters):
            mean = data[pick_k_random_indices[cluster_num], :]
            covariance = self._s * np.eye(self._n_features)
            mapping = self.cluster(1.0 / self._n_clusters, mean, covariance)
            self._clusters.append(mapping)
        self._partitions = np.empty(self._n_samples, dtype=np.int32)
At this point we have set up all of our base case stuff. We have @k, which is the num‐ ber of clusters, @data is the data we pass in that we want to cluster, @labels are an array full of the probabilities that the row is in each cluster, and @classes holds on to an array of means and covariances, which tells us where the distribution of data is. Last, @partitions are the assignments of each data row to cluster index. Now we need to build our expectation step, which is to figure out what the probabil‐ ity of each data row is in each cluster. To do this we need to write a new method, expect, which will do this: class EMClustering(object): # __init__ # setup()
172
|
Chapter 9: Clustering
def expect(self): log_likelyhood = 0 for cluster_num, cluster in enumerate(self._clusters): log_density = dvmnorm(self._data, cluster.mean, \ cluster.covariance, log=True) membership_weights = cluster.weight * np.exp(log_density) log_likelyhood += sum(log_density * \ self._membership_weights[:, cluster_num]) self._membership_weights[:, cluster_num] = membership_weights for sample_num, probabilities in enumerate(self._membership_weights): prob_sum = sum(probabilities) self._partitions[sample_num] = np.argmax(probabilities) if prob_sum == 0: self._membership_weights[sample_num, :] = np.ones_like(probabilities) / \ self._n_clusters else: self._membership_weights[sample_num, :] = probabilities / prob_sum self.logger.debug('log likelyhood %f', log_likelyhood)
The first part of this code iterates through all clusters, which hold the means and covariances of each cluster. From there we want to find the inverse covariance matrix and the determinant of the covariance. For each row we calculate a value that is proportional to the probability that the row is in a cluster. This is:

p_ij = det(C) e^(-(1/2) (x_j - mu_i)^T C^(-1) (x_j - mu_i))

This is effectively a Gaussian distance metric that helps us determine how far outside of the mean our data is. Let's say that the row vector is exactly the mean. This equation would reduce down to p_ij = det(C), which is just the determinant of the covariance matrix. This is actually the highest value you can get out of this function. If, for instance, the row vector were far away from the mean vector, then p_ij would become smaller and smaller due to the exponentiation and the negative fraction in front. The nice thing is that this is proportional to the Gaussian probability that the row vector belongs to the cluster. Since it is proportional, not equal, we end up normalizing in the last part so the weights sum up to 1. Now we can move on to the maximization step:
class EMClustering(object):
    # __init__
    # setup()
    # expect()

    def maximize(self):
        for cluster_num, cluster in enumerate(self._clusters):
            weights = self._membership_weights[:, cluster_num]
            weight = np.average(weights)
            mean = np.average(self._data, axis=0, weights=weights)
            covariance = np.cov(self._data, rowvar=False, ddof=0,
                                aweights=weights)
            self._clusters[cluster_num] = self.cluster(weight, mean,
                                                       covariance)
Again, here we are iterating over the clusters. For each one we take that cluster's column of membership weights, average it to get the new cluster weight, compute the weighted mean of the data with np.average, and compute the weighted covariance matrix with np.cov. At that point we store the updated weight, mean, and covariance back into the cluster.

Now we can get down to using this. To do that we add a convenience method that runs the expectation and maximization steps in a loop:

class EMClustering(object):
    # __init__
    # setup()
    # expect()
    # maximize()

    def fit_predict(self, data, iteration=5):
        self.setup(data)
        for i in range(iteration):
            self.logger.debug('Iteration %d', i)
            self.expect()
            self.maximize()
        return self
The Results from the EM Jazz Clustering

Back to our results of EM clustering with our jazz music. To actually run the analysis, we run the following script:

import csv

import numpy as np

from em_clustering import EMClustering

np.set_printoptions(threshold=9000)

data = []
artists = []
years = []

# Read the collapsed two-genre file built later in this section
with open('data/less_covariance_jazz_albums.csv') as csvfile:
    reader = csv.DictReader(csvfile)
    for row in reader:
        artists.append(row['artist_album'])
        years.append(row['year'])
        data.append([int(row['Genre_1']), int(row['Genre_2'])])

clusterer = EMClustering(n_clusters=25)
clusters = clusterer.fit_predict(np.array(data))
print(clusters.partitions)
The first thing you'll notice about EM clustering is that it's slow. It's not as quick as calculating new centroids and iterating; it has to calculate covariances and means, which is inefficient. Ockham's razor would tell us that EM clustering is most likely not a good choice for clustering large amounts of data.

The other thing you'll notice is that our annotated jazz music will not work, because its covariance matrix is singular. This is not a good thing; as a matter of fact, this problem is ill suited for EM clustering because of it, so we have to transform it into a different problem altogether. We do that by collapsing the dimensions into the top two genres by index:

import csv

with open('data/less_covariance_jazz_albums.csv', 'w') as csvout:
    writer = csv.writer(csvout, delimiter=',')
    # Write the header of the CSV file
    writer.writerow(['artist_album', 'key_index', 'year',
                     'Genre_1', 'Genre_2'])

    with open('data/annotated_jazz_albums.csv') as csvin:
        reader = csv.DictReader(csvin)
        for row in reader:
            genres = [0, 0]
            genre_idx = 0
            idx = 0
            for key, value in row.items():
                if genre_idx == 2:
                    break
                if value == '1':
                    genres[genre_idx] = idx
                    genre_idx += 1
                idx += 1
            if genres[0] > 0 or genres[1] > 0:
                line = [row['artist_album'], row['key_index'],
                        row['year'], genres[0], genres[1]]
                writer.writerow(line)
Basically what we are doing here is taking the first two genres, assigning a genre index to each, and storing it. We'll skip any albums with zero information assigned to them. At this point we are able to run the EM clustering algorithm, except that things become too difficult to actually cluster. This is an important lesson with EM clustering: the data we have actually doesn't cluster, because the matrix has become too unstable to invert. Some possibilities for refinement would be to try out K-Means or other clustering algorithms, but really a better approach would be to work on the data some more. Jazz albums are a fun example but data-wise aren't very illustrative. We could, for instance, expand using some more musical genomes, or feed this into some sort of text-based model. Or maybe we could spelunk for musical cues using fast Fourier transforms! The possibilities are really endless, but this gives us a good start.
Conclusion

Clustering is useful but can be a bit hard to control, since it is unsupervised. Add the fact that we are dealing with the impossibility of having consistency, richness, and scale-invariance all at once, and clustering can be a bit useless in many contexts. But don't let that get you down; clustering can be useful for analyzing data sets and splitting data into arbitrary clusters. If you don't care how they are split and just want them split up, then clustering is good. Just know that there are sometimes odd circumstances.
CHAPTER 10
Improving Models and Data Extraction
How do you go about improving upon a simple machine learning algorithm such as Naive Bayesian Classifiers, SVMs, or really any method? That is what we will delve into in this chapter, by talking about four major ways of improving models:

• Feature selection
• Feature transformation
• Ensemble learning
• Bootstrapping

I'll outline the benefits of each of these methods, but in general they reduce entanglement, overcome the curse of dimensionality, and reduce correction cascades and sensitivity to data changes. They each have certain pros and cons and should be used when there is a purpose behind them. Sometimes problems are sufficiently complex that tweaking and improvement are warranted at this level; other times they are not. That is a judgment people must make depending on the business context.
Debate Club

I'm not sure if this is common throughout the world, but in the United States, debate club is a high school fixture. For those of you who haven't heard of this, it's a simple idea: high schoolers take polarizing issues and debate their side. This serves as a great way for students who want to become lawyers to try out their skills arguing a case. The fascinating thing about this is just how rigorous and disciplined these kids are. Usually they study all kinds of facts to put together a dossier of important points to
make. Sometimes they argue for a side they don't agree with, but they do so with conviction. Why am I telling you this? These debate club skills are the key to making machine learning algorithms (and in many cases any algorithm) work better:

• Collecting factual and important data
• Arguing different points of view in multiple ways

As you can imagine, if we can collect important and relevant data to feed into our models, and try different methods or approaches to the same problem, we will iteratively get better as we find the best model combination. This gets us into what we will be talking about: picking better data, and arguing for solutions more effectively.
Picking Better Data

In this section we'll discuss how to pick better data. Basically we want to find the most compact, simplest set of data that backs up what we are trying to solve. Intuitively, that means we want the data that supports our conclusion, which is a bit of cart before the horse; regardless, there are two great families of methods for improving the data we use: feature selection and feature transformation algorithms.

This sounds like a great idea, but what is the motivation behind picking better data? Generally speaking, machine learning methods are better suited to smaller dimensions that are well correlated with the data. As we have discussed, data can become extremely overfit, entangled, or track improperly with many dimensions. We don't want to under- or overfit our data, so finding the best set to map is the best use of our time.
Feature Selection

Let's think about some data that doesn't make a whole lot of sense. Say we want to measure weather data and want to be able to predict temperature given three variables: "Matt's Coffee Consumption," "Ice Cream Consumption," and "Season" (see Table 10-1 and Figure 10-1).

Table 10-1. Weather data for Seattle

Average temperature (°F)  Matt's coffee consumption (cups)  Ice cream consumption (scoops)  Month
47                        4.1                               2                               Jan
50                        4                                 2                               Feb
54                        4                                 3                               Mar
58                        4                                 3                               Apr
65                        4                                 3                               May
70                        4                                 3                               Jun
76                        4                                 4                               Jul
76                        4                                 4                               Aug
71                        4                                 4                               Sep
60                        4                                 3                               Oct
51                        4                                 2                               Nov
46                        4.1                               2                               Dec
Figure 10-1. A graph comparing my consumption of ice cream (in scoops) and coffee (in cups) with the temperature
Obviously, you can see that I generally drink about 4 cups of coffee a day. I tend to eat more ice cream in the summertime, and it's generally hotter around that time. But what can we do with this data?

There are at most N choose K solutions to any data set, so given N dimensions, we can find an enormous number of combinations of various-sized subsets. At this point we want to reduce the number of dimensions we are looking at but don't know where to start. In general, we want to minimize the redundancy of our data while maximizing the relevancy. As you can imagine, this is a tradeoff: if we keep all the data, then we know 100% that we have relevant data, whereas if we reduce some number of dimensions we might have redundancy, especially if we have lots and lots of dimensions. We have talked about this before as an entanglement problem: too many data points pointing to the same thing.

In general, redundancy and relevancy are calculated using the same metrics, on a spectrum:

• Correlation
• Mutual information
• Distance from some point (Euclidean distance from a reference)

So they actually end up measuring the same thing. How do we solve this? Let's first take a step back and think about what would happen if we just looked at all possibilities.
Exhaustive Search

Let's imagine that in this case we want to find the best possible dimensions to train on. We could realistically just search through all possibilities. In this case we have three dimensions, which would equate to seven models (123, 12, 13, 23, 1, 2, 3). From here we could say that we want to find the model that has the highest accuracy (Figure 10-2).
Figure 10-2. Exhaustive search for best features

This unfortunately doesn't work as well as you go up in dimensions. If, for instance, you have 10 dimensions, the number of possibilities from selecting 10 dimensions down to 1 dimension would be 2^10 - 1. This can be written as a sum of combinations, which can be read off a row of Pascal's triangle (Figure 10-3):

(10 choose 10) + (10 choose 9) + ... + (10 choose 1)
Figure 10-3. Pascal's triangle

Pascal's triangle shows all combinations for a given row. Since each row sums up to 2^n, all we need to do is subtract 1 so we don't count the empty set of dimensions. So as you add dimensions, you have to account for 2^n - 1 possible data sets. If you had 3,000 dimensions (which would be a good reason to use feature selection), you would have roughly a trecentillion (10^903) models to run through! Surely there is a better way. We don't need to try every model. Instead, what if we just randomly selected features?
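The counting argument above is easy to verify in a few lines; this sketch (the helper name is ours) sums the binomial coefficients and checks them against 2^n - 1:

```python
from math import comb

def num_feature_subsets(n):
    """Count every nonempty subset of n features: C(n,1) + ... + C(n,n)."""
    return sum(comb(n, k) for k in range(1, n + 1))

print(num_feature_subsets(3))   # the 7 models from the three-dimension example
print(num_feature_subsets(10))  # 1023, which is 2**10 - 1
```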
Random Feature Selection

A lot of the time, random feature selection will be useful enough for our purposes. Reducing the features by half or some other amount is an excellent way of reducing overfitting, and the added benefit is that you really don't have to think about it much. Say, for instance, you want to reduce the features by 25%. You could randomly drop features and see how the model performs on accuracy, precision, or recall. This is a simple way of selecting features, but there is one major downside: what if training the model is slow? You are still brute-forcing your way to finding features, arbitrarily picking a number and hoping for the best. Surely there is a better way.
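As a sketch of what "reduce the features by 25%" might look like in practice (the function name and seed are invented for illustration):

```python
import random

def random_feature_subset(n_features, keep=0.75, seed=None):
    """Keep a random fraction of the feature indices, a blunt but cheap selector."""
    rng = random.Random(seed)
    n_keep = max(1, int(n_features * keep))
    return sorted(rng.sample(range(n_features), n_keep))

columns = random_feature_subset(8, keep=0.75, seed=0)
print(columns)  # 6 of the 8 column indices, chosen at random
```

You would then train only on the chosen columns and compare accuracy, precision, or recall against the full feature set.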
A Better Feature Selection Algorithm

Instead of relying on random feature selection, let's think a little more in terms of what we want to improve in our model. We want to increase relevancy while reducing redundancy. Relevancy is a measure of how relevant the dimension in question is to the classification, whereas redundancy is a measure of how redundant the dimension is compared to all the other dimensions. Usually for relevancy and redundancy you use either correlation or mutual information.

Correlation is useful for data that is continuous in nature and not nominal. By contrast, mutual information gives us a discrete measure of the information shared between the two dimensions in question. Using our earlier example, correlation would look like the results in Table 10-2 for relevancy and Table 10-3 for redundancy.
Table 10-2. Relevancy using correlation

Dimension                  Correlation to temperature
Matt's coffee consumption  -0.58
Ice cream                  0.93
Month                      0.16

Table 10-3. Redundancy using correlation

Dimension                  Matt's coffee consumption  Ice cream  Month
Matt's coffee consumption  1                          -0.54      0
Ice cream                  -0.54                      1          0.14
Month                      0                          0.14       1
As you can see from these two tables, ice cream is highly correlated with temperature, and my coffee consumption is somewhat negatively correlated with temperature; the month seems irrelevant. Intuitively we would think month would make a huge difference, but since it runs on a modular clock it's hard to model using linear approximations. The redundancy is more interesting. Taken out of context, my coffee consumption and month seem to have low redundancy, while coffee and ice cream seem more redundant. So what can we do with this data? Next I'm going to introduce a significant algorithm that brings this all together.
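Both tables can be reproduced from Table 10-1 in a few lines of NumPy; the first row of the correlation matrix is the relevancy column, and the remaining block is the redundancy matrix:

```python
import numpy as np

# Columns: temperature, coffee (cups), ice cream (scoops), month (1-12)
matrix = np.array([
    [47, 4.1, 2, 1], [50, 4, 2, 2], [54, 4, 3, 3], [58, 4, 3, 4],
    [65, 4, 3, 5], [70, 4, 3, 6], [76, 4, 4, 7], [76, 4, 4, 8],
    [71, 4, 4, 9], [60, 4, 3, 10], [51, 4, 2, 11], [46, 4.1, 2, 12],
])

corr = np.corrcoef(matrix, rowvar=False)
relevancy = corr[0, 1:]    # correlation of each feature with temperature
redundancy = corr[1:, 1:]  # feature-to-feature correlations

print(np.round(relevancy, 2))   # approximately [-0.58, 0.93, 0.16]
print(np.round(redundancy, 2))
```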
Minimum Redundancy Maximum Relevance Feature Selection

To bring all of these competing ideas together into one unified algorithm, there is minimum redundancy maximum relevance (mRMR) feature selection, which aims to maximize relevancy while minimizing redundancy. We can set this up as an optimization problem using NumPy and SciPy. In this formulation we maximize relevancy minus redundancy (the code below minimizes its negative):

Equation 10-1. mRMR definition

max (Relevancy - Redundancy)

Equation 10-2. Relevancy definition

Relevancy = (sum_{i=1}^{n} c_i x_i) / (sum_{i=1}^{n} x_i)

Equation 10-3. Redundancy definition

Redundancy = (sum_{i,j=1}^{n} a_ij x_i x_j) / (sum_{i=1}^{n} x_i)^2

Here x_i indicates whether dimension i is selected, c_i is the relevancy (correlation with the target) of dimension i, and a_ij is the redundancy (correlation) between dimensions i and j.
More importantly, in code we have:

from scipy.optimize import minimize
import numpy as np

matrix = np.array([
    [47, 4.1, 2, 1], [50, 4, 2, 2], [54, 4, 3, 3], [58, 4, 3, 4],
    [65, 4, 3, 5], [70, 4, 3, 6], [76, 4, 4, 7], [76, 4, 4, 8],
    [71, 4, 4, 9], [60, 4, 3, 10], [51, 4, 2, 11], [46, 4.1, 2, 12]
])

corrcoef = np.corrcoef(np.transpose(matrix))
relevancy = np.transpose(corrcoef)[0][1:]

# Set initial to all dimensions on
x0 = [1, 1, 1]

# Minimize the redundancy minus relevancy
fun = lambda x: sum([corrcoef[i + 1, j + 1] * x[i] * x[j]
                     for i in range(len(x))
                     for j in range(len(x))]) / \
    (sum(x) ** 2) - \
    (sum(relevancy * x) / sum(x))

res = minimize(fun, x0, bounds=((0, 1), (0, 1), (0, 1)))
res.x
# array([ 0.29820206,  1.        ,  0.1621906 ])

This gives us almost exactly what we expected: my ice cream consumption models the temperature quite well. For bonus points you could use an integer programming method to force the values to be either 0 or 1, but for these purposes it's obvious which features should be selected to improve the model.
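A cheap stand-in for that integer-programming step is to threshold the continuous weights (the 0.5 cutoff here is an arbitrary choice of ours):

```python
# Weights returned by the mRMR minimization above: coffee, ice cream, month
weights = [0.298, 1.0, 0.162]

selected = [i for i, w in enumerate(weights) if w >= 0.5]
print(selected)  # only index 1 (ice cream) clears the bar
```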
Feature Transformation and Matrix Factorization

We've actually already covered feature transformation quite well in the previous chapters. For instance, clustering and the kernel trick are both feature transformation methods, effectively taking a set of data and projecting it into a new space, whether it's a cluster number or an expanded way of looking at the data. In this section, though, we'll talk about another set of feature transformation algorithms that are rooted in linear algebra. These are generally used to factor a matrix down to a smaller size and can be used to improve models. To understand feature transformation, let's take a look at two algorithms that transform a matrix into a new, more compressed or more verbose version of itself: principal component analysis and independent component analysis.
Principal Component Analysis

Principal component analysis (PCA) has been around for a long time. This algorithm simply looks at the direction with the most variance and then determines that as the first principal component. This is very similar to how regression works in that it determines the best direction to map data to. Imagine you have a noisy data set that looks like Figure 10-4.
Figure 10-4. Graphical PCA from Gaussian
As you can see, the data has a definite direction: up and to the right. If we were to determine the principal component, it would be that direction, because the data has maximal variance that way. The second principal component would end up being orthogonal to it, and over iterations we would reduce our dimensions by transforming them into these principal directions. Another way of thinking about PCA is how it relates to faces. When you apply PCA to a set of faces, an odd result comes out, known as Eigenfaces (see Figure 10-5).
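A minimal PCA sketch on made-up data shaped like Figure 10-4: center the matrix, then take its first right singular vector, which is the first principal component:

```python
import numpy as np

# Fabricated noisy data with a clear "up and to the right" direction
rng = np.random.RandomState(0)
t = rng.normal(size=(200, 1))
data = np.hstack([t, t]) + rng.normal(scale=0.3, size=(200, 2))

# PCA by hand: center the data, then run SVD
centered = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
first_component = vt[0]

# Roughly the diagonal direction [0.71, 0.71], possibly with flipped sign
print(first_component)
```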
Figure 10-5. Eigenfaces (Source: AT&T Laboratories)

While these look quite odd, it is fascinating that what comes out is really an average face summed up over all of the training data. Instead of implementing PCA now, we'll wait until the next section, where we look at an algorithm known as independent component analysis (ICA), which actually relies on PCA as well.
Independent Component Analysis

Imagine you are at a party and your friend is coming over to talk to you. Near you is someone you hate who won't shut up, and on the other side of the room is a washing machine that keeps making noise (see Figure 10-6).
Figure 10-6. Cocktail party example

You want to know what your friend has been up to, so you listen to her closely. Being human, you are adept at separating out sounds like the washing machine and that loudmouth you hate. But how could we do that with data? Let's say that instead of listening to your friend, you only had a recording and wanted to filter out all of the noise in the background. How would you do something like that? You'd use an algorithm called ICA. Technically, ICA minimizes mutual information, or the information shared between the two variables. This makes intuitive sense: find me the signals in the aggregate that are different. Compared to our face recognition example in Figure 10-5, what does ICA extract? Well, unlike Eigenfaces, it extracts features of a face, like noses, eyes, and hair.

PCA and ICA are useful for transforming data so we can analyze it even better (see Figure 10-7). We can then use this more succinct data to feed our models more useful and relevant information, which will improve them beyond just cross-validation.
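scikit-learn ships an ICA implementation, FastICA; here it unmixes two fabricated signals, a sine for your friend and a square wave for the washing machine, from two "microphone" recordings:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two made-up sources and two "microphones" that each hear a different mix
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(2 * t), np.sign(np.sin(7 * t))]
mixing = np.array([[1.0, 0.5],
                   [0.4, 1.0]])
recorded = sources @ mixing.T

# ICA recovers the sources up to order, sign, and scale
recovered = FastICA(n_components=2, random_state=0).fit_transform(recorded)
```

Each recovered column lines up strongly with one of the original sources, which is exactly the "separate the voices" behavior from the party story.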
Figure 10-7. ICA extraction example

Now that we know about feature transformation and feature selection, let's discuss what we can do in terms of better arguing for a classification or regression point.
Ensemble Learning

Up until this point we have discussed selecting dimensions as well as transforming dimensions into new ones. Both of these approaches can be quite useful when improving models or the data we are using. But there is yet another way of improving our models: ensemble learning. Ensemble learning is a simple concept: build multiple models and aggregate them together. We have already encountered this with random forests in Chapter 5.

A common example of ensemble learning is actually weather. When you hear a forecast for the next week, you are most likely hearing an aggregation of multiple weather models. For instance, the European model (ECMWF) might predict rain and the US model (GFS) might not. Meteorologists take both of these models, determine which one is most likely to be right, and deliver that information during the evening news. When aggregating multiple models, there are two general methods of ensemble learning: bagging, a naive method; and boosting, a more elegant one.
Bagging

Bagging, or bootstrap aggregation, has been a very useful technique. The idea is simple: take a training set and generate new training sets off of it. Let's say we have a training set of data that is 1,000 items long and we split that into 50 training sets of 100 apiece. (Because we sample with replacement, these 50 training sets will overlap, which is okay as long as they are unique.) From here we could feed these into 50 different models.

At this point we have 50 different models telling us 50 different answers. Like the weather report just mentioned, we can either find the one we like the most or do something simpler, like average all of them. This is what bootstrap aggregating does: it averages all of the models to yield the average result off of the same training set. The amazing thing about bagging is that in practice it ends up improving models substantially, because it has a tendency to remove some of the outliers.

But should we stop here? Bagging seems like a bit of a lucky trick, and also not very elegant. Another ensemble learning tool is even more powerful: boosting.
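The procedure above sketches out to a few lines of plain Python; the toy "model" here just predicts its training sample's mean, and all the names are invented for illustration:

```python
import random
import statistics

def bag(train, fit, n_models=50, sample_size=100, seed=1):
    """Fit one model per bootstrap resample, then average their predictions."""
    rng = random.Random(seed)
    models = [fit([rng.choice(train) for _ in range(sample_size)])  # sample with replacement
              for _ in range(n_models)]
    return lambda x: statistics.mean(m(x) for m in models)

# Toy "model": always predict the mean of its training sample
fit_mean = lambda rows: (lambda x: statistics.mean(rows))

train = list(range(1000))
predict = bag(train, fit_mean)
print(predict(None))  # close to the true mean of 499.5
```

In practice `fit` would train a real learner (a decision tree, say), but the averaging step is the same.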
Boosting

Instead of splitting training data into multiple data sets, we can use another method, boosting, to optimize the best weighting scheme for a training set.

Given a binary classification model such as an SVM, a decision tree, or a Naive Bayesian Classifier, we can boost the training data to actually improve the results. Assuming that you have a training set like the one just described, with 1,000 data points, we usually operate under the premise that all data points are of equal importance. Boosting starts with the same assumption, but we intuitively know that not all training points are the same. What if we were able to optimally weight each input based on what is most relevant?

That is what boosting aims to do. Many algorithms can do boosting, but the most popular is AdaBoost. To use AdaBoost we first need to fix up the training data just a bit. There is a requirement that all training data answers are either 1 or -1. So, for instance, with spam classification we would say that spam is 1 and not spam is -1. Once we have changed our data to reflect that, we can introduce a special error function:

E(f(x), y, i) = e^(-y_i f(x_i))
This function is quite interesting. Table 10-4 shows all four cases.

Table 10-4. Error function in all cases

f(x)  y   e^(-y_i f(x_i))
1     1   1/e
-1    1   e
1     -1  e
-1    -1  1/e
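The four cases are easy to check numerically (the helper function is just for illustration):

```python
import math

def boost_error(f_x, y):
    """The exponential error e**(-y * f(x)) from Table 10-4."""
    return math.exp(-y * f_x)

# Agreement gives the small value 1/e; disagreement is penalized with e
print(boost_error(1, 1), boost_error(-1, -1))  # both 1/e
print(boost_error(1, -1), boost_error(-1, 1))  # both e
```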
As you can see, when f(x) and y are equal, the error rate is minimal, but when they are not the same it is much higher. From here we can iterate through a number of rounds and descend on a better weighting scheme using this algorithm:

• Choose a hypothesis function h_t (an SVM, a Naive Bayesian Classifier, or something else)
  - Using that hypothesis, sum up the weights of the points that were misclassified:

    epsilon = sum of w_i over all i where h_t(x_i) != y_i

  - Choose a learning rate based on the error rate:

    alpha_t = (1/2) ln((1 - epsilon) / epsilon)

• Add to the ensemble:

    F_t(x) = F_{t-1}(x) + alpha_t h_t(x)

• Update the weights:

    w_{i,t+1} = w_{i,t} e^(-y_i alpha_t h_t(x_i)) for all weights

• Renormalize the weights by making sure they add up to 1

What this does is converge on the best possible weighting scheme for the training data. It can be shown that this is a minimization problem over a convex set of functions. This meta-heuristic can be excellent at improving mediocre results from any weak classifier, like Naive Bayesian Classification or decision trees.
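One round of these updates can be traced in plain Python; the four points and the hard-coded stump are invented for illustration:

```python
import math

xs = [1.0, 2.0, 3.0, 4.0]
ys = [-1, -1, 1, 1]            # labels must be +/-1
ws = [0.25, 0.25, 0.25, 0.25]  # start with equal weights

stump = lambda x: 1 if x > 1.5 else -1  # misclassifies only x = 2.0

# Error rate: total weight of the misclassified points
epsilon = sum(w for x, y, w in zip(xs, ys, ws) if stump(x) != y)

# Learning rate from the error rate
alpha = 0.5 * math.log((1 - epsilon) / epsilon)

# Reweight: misclassified points get heavier, the rest lighter, then renormalize
ws = [w * math.exp(-y * alpha * stump(x)) for x, y, w in zip(xs, ys, ws)]
total = sum(ws)
ws = [w / total for w in ws]

print(epsilon, alpha)
print(ws)  # the misclassified point now carries half the total weight
```

The next round would pick a new stump against these updated weights, pushing it to focus on the point the first stump got wrong.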
Conclusion

You've learned a few different tricks of the trade for improving existing models: feature selection, feature transformation, ensemble learning, and bagging. In one big graphic it looks something like Figure 10-8.
Figure 10-8. Feature improvement in one model
As you can see, ensemble learning and bagging mostly focus on building many models and trying out different ideas, while feature selection and feature transformation are about modifying and studying the training data.
CHAPTER 11
Putting It Together: Conclusion
Well, here we are! The end of the book. While you probably don't have the same depth of understanding as a PhD in machine learning, I hope you have learned something. Specifically, I hope you've developed a thought process for approaching problems that machine learning works so well at solving. I firmly believe that using tests is the only way that we can effectively use the scientific method. It is the reason the modern world exists, and it helps us become much better at writing code. Of course, you can't write a test for everything, but it's the mindset that matters. And hopefully you have learned a bit about how you can apply that mindset to machine learning. In this chapter, we will discuss what we covered at a high level, and I'll list some suggested reading so you can dive further into machine learning research.
Machine Learning Algorithms Revisited

As we touched on earlier in the book, machine learning is split into three main categories: supervised, unsupervised, and reinforcement learning (Table 11-1). This book skips reinforcement learning, but I highly suggest you research it now that you have a better background. I'll list a source for you in the final section of this chapter.

Table 11-1. Machine learning categories

Supervised: Supervised learning is the most common machine learning category. This is functional approximation. We are trying to map some data points to some fuzzy function. Optimization-wise, we are trying to fit a function that best approximates the data to use in the future. It is called "supervised" because it has a learning set given to it.

Unsupervised: Unsupervised learning is just analyzing data without any sort of Y to map to. It is called "unsupervised" because the algorithm doesn't know what the output should be and instead has to come up with it itself.

Reinforcement: Reinforcement learning is similar to supervised learning, but with a reward that is generated from each step. For instance, this is like a mouse looking for cheese in a maze. The mouse wants to find the cheese and in most cases will not be rewarded until the end, when it finally finds it.
There are generally two types of biases for each of these categories: restriction and preference. Restriction bias is what limits the algorithm, while preference is what sort of problems it prefers. All of this information (shown in Table 11-2) helps us determine whether we should use each algorithm or not.

Table 11-2. Machine learning algorithm matrix

Algorithm: K-Nearest Neighbors
Type: Supervised
Class: Instance based
Restriction bias: Generally speaking, KNN is good for measuring distance-based approximations; it suffers from the curse of dimensionality
Preference bias: Prefers problems that are distance based

Algorithm: Naive Bayes
Type: Supervised
Class: Probabilistic
Restriction bias: Works on problems where the inputs are independent from each other
Preference bias: Prefers problems where the probability will always be greater than zero for each class

Algorithm: Decision Trees/Random Forests
Type: Supervised
Class: Tree
Restriction bias: Will work on just about anything
Preference bias: Prefers data that isn't highly variable
How to Use This Information to Solve Problems

Using Table 11-2, we can figure out how to approach a given problem. For instance, if we are trying to determine what neighborhood someone lives in, KNN is a pretty good choice, whereas Naive Bayesian Classification makes absolutely no sense. But Naive Bayesian Classification could determine sentiment or some other type of probability.

The SVM algorithm works well for problems such as finding a hard split between two pieces of data, and it doesn't suffer from the curse of dimensionality nearly as much. So SVM tends to be good for word problems where there are a lot of features. Neural networks can solve problems ranging from classifications to driving a car. HMMs can follow musical scores, tag parts of speech, and be used well for other system-like applications.

Clustering is good at grouping data together without any sort of goal. This can be useful for analysis, or just to build a library and store data effectively. Filtering is well suited for overcoming the curse of dimensionality. We saw it used predominantly in Chapter 3 by focusing on important attributes of mushrooms like cap color, smell, and the like.

What we didn't touch on in the book is that these algorithms are just a starting point. The important thing to realize is that it doesn't matter what you pick; it is what you are trying to solve that matters. That is why we cross-validate, and measure precision, recall, and accuracy. Testing and checking our work every step of the way guarantees that we at least approach better answers. I encourage you to read more about machine learning models and to think about applying tests to them. Most algorithms have them baked in, which is good, but to write code that learns over time, we mere humans need to be checking our own work as well.
What's Next for You?

This is just the beginning of your journey. The machine learning field is rapidly growing every single year. We are learning how to build robotic self-driving cars using deep learning networks, and how to classify health problems. The future is bright for machine learning, and now that you've read this book you are better equipped to learn more about deeper subtopics like reinforcement learning, deep learning, artificial intelligence in general, and more complicated machine learning algorithms.

There is a plethora of information out there for you. Here are a few resources I recommend:

• Peter Flach, Machine Learning: The Art and Science of Algorithms That Make Sense of Data (Cambridge, UK: Cambridge University Press, 2012).
• David J. C. MacKay, Information Theory, Inference, and Learning Algorithms (Cambridge, UK: Cambridge University Press, 2003).
• Tom Mitchell, Machine Learning (New York: McGraw-Hill, 1997).
• Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 3rd Edition (London: Pearson Education, 2009).
• Toby Segaran, Programming Collective Intelligence: Building Smart Web 2.0 Applications (Sebastopol, CA: O'Reilly Media, 2007).
• Richard Sutton and Andrew Barto, Reinforcement Learning: An Introduction (Cambridge, MA: MIT Press, 1998).

Now that you know a bit more about machine learning, you can go out and solve problems that are not black and white, but instead involve many shades of gray. Using a test-driven approach, as we have throughout the book, will equip you to see these problems through a scientific lens and to attempt to solve problems not by being true or false but instead by embracing a higher level of accuracy. Machine learning is a fascinating field because it allows you to take two divergent ideas like computer science, which is theoretically sound, and data, which is practically noisy, and zip them together in one beautiful relationship.
Index
A
activation functions, 135-140 Adaboost, 189 Adzic, Gojko, 6 Agile Manifesto, 5 algorithms summary, 17-18, 193-194 AmazonFresh, 6 artificial neural networks, 130
B
back propagation algorithm, 141, 142 bagging, 74-75, 189 Bain, Alexander, 130 Bayes' theorem, 46 (see also Naive Bayesian Classification) BeautifulSoup, 54 Beck, Kent, 5 Boolean logic, 130 boosting, 189-191 bootstrap aggregation (see bagging) Brown Corpus, 95-106 (see also part-of-speech tagging with the Brown Corpus)
C
chain rule, 47 Changing Anything Changes Everything (CACE), 9 clustering, 157-176, 195 consistency in, 165 data gathering, 166 EM algorithm, 163-165, 169-176 example with music categorization, 166-176 fitness functions, 160
ground truth testing, 161 and the impossibility theorem, 165 K-Means algorithm, 161-163, 168-169 richness in, 165 scale invariance in, 165 silhouette coefficient, 160 testing cluster mappings, 160-161 user cohorts, 158-160 code debt, 6 cohorts, 158-160 computational distances, 27-29 conditional probabilities, 44 confirmation bias, 10 confusion matrices, 77 consistency, 165 corpus/corpora, 116 CorpusParser, 97-99 correlation, 182 cosine distance/cosine similarity, 27 cosine/sine waves, 140 CPI index, 22 cross-validation error minimization through, 62-65 in KNN testing, 39 in part-of-speech tagging, 105 in sentiment analysis, 122-123 network classes and, 151-154 Cunningham, Ward, 5 curse of dimensionality, 31, 110
D
data classification, 67-83 confusion matrices, 77 domain knowledge, 68-69
subcategorization, 70-73 (see also decision trees) data collection, 109 data extraction, improving, 178-184 (see also feature selection) data, unstable, 11 decision boundary methods, 110-111 decision trees, 71-83 bagging, 74-75 coding and testing design, 76-80 continuous, 73 GINI impurity, 72 information gain, 71 overfitting, 73 pruning, 73-82 random forests, 75 testing, 80-82 variance reduction, 73 default dictionaries, 100 delta rule, 142 Dependency Inversion Principle (DIP), 4, 11 design debt, 6 distances, as class of functions, 25-31
E
EM clustering, 163-165, 169-176 email class, 51-54 emissions/observations of underlying states, 87-88 encapsulation, 3 ensemble learning, 188 bagging, 74-75 random forests, 75 entanglement, 9, 180 entropy, 72 epochs, 141, 146 Euclidean distance, 26 expectation, 164, 172 experimental paths, 11 exponentially weighted moving average, 126
F
Fahlman, Scott, 143 feature selection, 178-180 feature transformation, 185-188 (see also clustering) independent component analysis (ICA), 186-188 kernel trick, 111-114 principal component analysis (PCA), 185-186 feedback loops, hidden, 10 the Five Principles (see SOLID) Fix, Evelyn, 24 folds, 39-41 Forward Backward algorithm, 90-94 function approximation, 15

G
geometrical distance, 26 GINI impurity, 72 glue code, 9 gradient descent algorithm, 141 ground truth, 161

H
Hansson, David Heinemeier, 5 hedonic regression, 22 Hidden Markov models (HMMs), 85-106, 195 components, 90 decoding component, 94 emissions/observations of underlying states, 87-88 evaluation component, 90-94 Forward-Backward algorithm, 90-94 learning component, 95 Markov Assumption, 89 overview, 85 part-of-speech tagging with, 95-106 (see also part-of-speech tagging with the Brown Corpus) Python setup for, 96 tracking user behavior, 85-87 Viterbi component, 94 hidden feedback loops, 10 hill climbing problem, 35 Hodges, J. L. Jr., 24

I
impossibility theorem, 165 improving models (see model improvement) independent component analysis (ICA), 186-188 information gain, 71 Interface Segregation Principle (ISP), 3, 11 intuitive models (see Hidden Markov model)
iterations, 141
J
Jaccard distance, 31 James, William, 130 joint probability, 47
K
K-Means clustering, 161-163, 168-169 K-Nearest Neighbors algorithm, 21-41, 195 distance functions, 25-31 history of, 24 KNN testing, 39-41 picking K, 32-35 algorithms for, 35 guessing, 32 heuristics for, 33-35 regressor construction, 37-38 valuation example, 35-41 KDTree, 37 kernel trick, 111-114 KNN (see K-Nearest Neighbors algorithm)
L
Laplace, Pierre-Simon, 46 Levenshtein distance, 29 Liskov Substitution Principle (LSP), 3, 10
M
machine learning algorithms summary, 17-18, 193-194 defined, 7, 15 introduction to, 15-19 reinforcement learning, 17 supervised learning, 15 technical debt issues with, 8 unsupervised learning, 16 Mahalanobis distance, 30 Manhattan distance, 29 Markov Assumption, 89 Markov chains, 89 Martin, Robert, 2, 4 mathematical notations table, 18 maximization, 165, 173 maximum likelihood estimate, 99 McCulloch, Warren, 130 mean absolute error, 36 mean distances, 160
Metz, Sandi, 4 Minkowski distance, 26 model improvement, 177-192 bagging, 189 boosting, 189-191 data extraction, 178-184 ensemble learning, 188 exhaustive search, 180-182 feature selection improved algorithm for, 182-183 minimum redundancy maximum rele‐ vance (mRMR), 183-184 random, 182
N
Naive Bayesian Classification, 43-65, 89, 195 chain rule, 47 conditional probabilities, 44 inverse conditional probability, 46 (see also Bayes' theorem) joint probabilities, 47 naiveté in Bayesian reasoning, 47-49 probability symbols, 44-45 pseudocount, 49 spam filter creation, 50-65 (see also spam filters) named tuples, 170 neural networks, 129-155 activation functions, 135-140 artificial, 130 back propagation algorithm, 141, 142 cross-validation, 151-154 defined, 130 delta rule, 142 error tolerance, 146 hidden layers, 134, 145-146 history of, 130 input layer, 132-134 language classification with, 147-154 max epochs, 146 neurons, 135, 140, 146 output layer, 141 precision and recall, 154 QuickProp algorithm, 141, 143 RProp algorithm, 141, 143 standard inputs, 134 symmetric inputs, 134 training algorithms, 141 tuning, 154
neurons, 135, 140, 146 NumPy, 37, 78, 152, 183
O
Open/Closed Principle (OCP), 3, 10 overfitting, 73, 114
P
Pandas, 37, 78 part-of-speech tagging with the Brown Corpus, 95-106 cross-validation testing, 105 improving on, 106 seam (CorpusParser), 97-99 writing, 99-105 Pascal's triangle, 182 perceptrons, 131 pipeline jungles, 11 Pitts, Walter, 130 precision and recall, 154 predictive policing, 10 principal component analysis (PCA), 185-186 probability conditional, 44 inverse conditional, 46 (see also Bayes' theorem) joint, 47 state, 87-88 probability symbols, 44-45 pseudocount, 49 Pythagorean theorem, 25 Python dictionaries in, 100 installing, 18 packages, 37 setup for sentiment analysis, 115 setup notes, 50 unit testing, 52-52
Q
QuickProp algorithm, 141, 143
R
random forests, 75 recall, 154 redundancy, 182-184 refactoring, 5-6, 13 regression, 79
regression, hedonic, 22 reinforcement learning, 17 relevancy, 182-184 richness, 165 Riedmiller, Martin, 143 Riel, Arthur, 4 RProp algorithm, 141, 143
S
sales funnel, 85 scale invariance, 165 scikit-learn, 37, 78 SciPy, 37, 183 seams, 97 sentiment analysis, 107-109 (see also Support Vector Machines (SVMs)) aggregating sentiment, 125-127 example, with SVMs, 114-125 exponentially weighted moving average, 126 mapping sentiment to bottom line, 127 sigmoidal functions, 140 silhouette coefficient, 160 Single Responsibility Principle (SRP), 2, 9 slack, 114 SOLID, 2-4, 9-12 spam filters creation with Naive Bayesian Classifier, 50-65 building the classifier, 59-61 calculating a classification, 61-62 coding and testing design, 50 data source, 51 email class, 51-54 error minimization with cross-validation, 62-65 storing training data, 57-58 tokenization and context, 54-56 SpamTrainer, 56-65 SRP (see Single Responsibility Principle) state machines defined, 86 emissions in state probabilities, 87-88 for tracking user behavior, 85-87 Markov chains versus, 89 simplification through Markov assumption, 89 statistical distances, 29-31 supervised learning, 15
Support Vector Machines (SVMs), 107-128, 195 (see also sentiment analysis) coding and testing design, 115 cross-validation testing, 122-123 data collection, 109 decision boundaries, 110-111 kernel trick for feature transformation, 111-114 optimizing with slack, 114 sentiment analysis example, 114-125 testing strategies, 116-125

T
Taxicab distance, 29 TDD (Test-Driven Design/Development), 2, 4-5, 12 technical debt, 6 test-driven damage, 5 threshold logic, 130 tokenization, 54-56, 117 tombstones, 12 training algorithms, 141 transition matrix, 86 triangle inequality, 25 tuples, 170

U
unstable data, 11 unsupervised learning, 16, 157 (see also clustering) user behavior (see Hidden Markov model) user cohorts, 158-160

V
variance reduction, 73 visibility debt, 11 Viterbi algorithm, 94, 103-105
W
waterfall model, 4 Webvan, 6
About the Author Matthew Kirk is a data architect, software engineer, and entrepreneur based out of Seattle, WA. For years, he struggled to piece together his quantitative finance background with his passion for building software. Then he discovered his affinity for solving problems with data. Now, he helps multimillion dollar companies with their data projects. From diamond recommendation engines to marketing automation tools, he loves educating engineering teams about methods to start their big data projects. To learn more about how you can get started with your big data project (beyond reading this book), check out matthewkirk.com/tml for tips.
Colophon The animal on the cover of Thoughtful Machine Learning with Python is the Cuban solenodon (Solenodon cubanus), also known as the almiqui. The Cuban solenodon is a small mammal found only in the Oriente province of Cuba. They are similar in appearance to members of the more common shrew family, with long snouts, small eyes, and a hairless tail. The diet of the Cuban solenodon is varied, consisting of insects, fungi, and fruits, but also other small animals, which they incapacitate with venomous saliva. Males and females only meet up to mate, and the male takes no part in raising the young. Cuban solenodons are nocturnal and live in subterranean burrows. The total number of Cuban solenodons is unknown, as they are rarely seen in the wild. At one point they were considered to be extinct, but they are now classified as endangered. Predation from the mongoose (introduced during Spanish colonization) as well as habitat loss from recent construction have negatively impacted the Cuban solenodon population.
Denormals, NaNs, and infinities round out the set of standard floating-point values, and these important values can sometimes cause performance problems. The good news is, it’s getting better, and there are diagnostics you can use to watch for problems.
In this post I briefly explain what these special numbers are, why they exist, and what to watch out for.
This article is the last of my series on floating-point. The complete list of articles in the series is:
- 1: Tricks With the Floating-Point Format – an overview of the float format
- 2: Stupid Float Tricks – incrementing the integer representation of floats
- 3: Don’t Store That in a Float – a cautionary tale about time
- 3b: They sure look equal… – special bonus post (not on altdevblogaday)
- 4: Comparing Floating Point Numbers, 2012 Edition – tricky but important
- 5: Float Precision—From Zero to 100+ Digits – what does precision mean, really?
- 5b: C++ 11 std::async for Fast Float Format Finding – special bonus post (not on altdevblogaday) on fast scanning of all floats
- 6: Intermediate Precision – their effect on performance and results
- 7.0000001: Floating-Point Complexities – a lightning tour of all that is weird about floating point
- 8: Exceptional Floating point – using floating-point exceptions to find bugs
- 9: That’s Not Normal–the Performance of Odd Floats
The special float values include:
Infinities
Positive and negative infinity round out the number line and are used to represent overflow and divide-by-zero. There are two of them.
NaNs
NaN stands for Not a Number and these encodings have no numerical value. They can be used to represent uninitialized data, and they are produced by operations that have no meaningful result, like infinity minus infinity or sqrt(-1). There are about sixteen million of them, they can be signaling and quiet, but there is otherwise usually no meaningful distinction between them.
Denormals
Most IEEE floating-point numbers are normalized – they have an implied leading one at the beginning of the mantissa. However this doesn’t work for zero so the float format specifies that when the exponent field is all zeroes there is no implied leading one. This also allows for other non-normalized numbers, evenly spread out between the smallest normalized float (FLT_MIN) and zero. There are about sixteen million of them and they can be quite important.
If you start at 1.0 and walk through the floats towards zero then initially the gap between numbers will be 0.5^24, or about 5.96e-8. After stepping through about eight million floats the gap will halve – adjacent floats will be closer together. This cycle repeats about every eight million floats until you reach FLT_MIN. At this point what happens depends on whether denormal numbers are supported.
If denormal numbers are supported then the gap does not change. The next eight million numbers have the same gap as the previous eight million numbers, and then zero is reached. It looks something like the diagram below, which is simplified by assuming floats with a four-bit mantissa:
With denormals supported the gap doesn’t get any smaller when you go below FLT_MIN, but at least it doesn’t get larger.
If denormal numbers are not supported then the last gap is the distance from FLT_MIN to zero. That final gap is then about 8 million times larger than the previous gaps, and it defies the expectation of intervals getting smaller as numbers get smaller. In the not-to-scale diagram below you can see what this would look like for floats with a four-bit mantissa. In this case the final gap, between FLT_MIN and zero, is sixteen times larger than the previous gaps. With real floats the discrepancy is much larger:
If we have denormals then the gap is filled, and floats behave sensibly. If we don’t have denormals then the gap is empty and floats behave oddly near zero.
The need for denormals
One easy example of when denormals are useful is the code below. Without denormals it is possible for this code to trigger a divide-by-zero exception:
float GetInverseOfDiff(float a, float b)
{
    if (a != b)
        return 1.0f / (a - b);
    return 0.0f;
}
This can happen because only with denormals are we guaranteed that subtracting two floats with different values will give a non-zero result.
To make the above example more concrete lets imagine that ‘a’ equals FLT_MIN * 1.125 and ‘b’ equals FLT_MIN. These numbers are both normalized floats, but their difference (.125 * FLT_MIN) is a denormal number. If denormals are supported then the result can be represented (exactly, as it turns out) but the result is a denormal that only has twenty-one bits of precision. The result has no implied leading one, and has two leading zeroes. So, even with denormals we are starting to run on reduced precision, which is not great. This is called gradual underflow.
Without denormals the situation is much worse and the result of the subtraction is zero. This can lead to unpredictable results, such as divide-by-zero or other bad results.
Even if denormals are supported it is best to avoid doing a lot of math at this range, because of reduced precision, but without denormals it can be catastrophic.
Performance implications on the x87 FPU
The performance of Intel's x87 units on these NaNs and infinities is pretty bad. Doing floating-point math with the x87 FPU on NaNs or infinities caused a 900 times slowdown on Pentium 4 processors. Yes, the same code would run 900 times slower if passed these special numbers. That's impressive, and it makes many legitimate uses of NaNs and infinities problematic.
Even today, on a SandyBridge processor, the x87 FPU causes a slowdown of about 370 to one on NaNs and infinities. I’ve been told that this is because Intel really doesn’t care about x87 and would like you to not use it. I’m not sure if they realize that the Windows 32-bit ABI actually mandates use of the x87 FPU (for returning values from functions).
The x87 FPU also has some slowdowns related to denormals, typically when loading and storing them.
Historically AMD has handled these special numbers much faster on their x87 FPUs, often with no penalty. However I have not tested this recently.
Performance implications on SSE
Intel handles NaNs and infinities much better on their SSE FPUs than on their x87 FPUs. NaNs and infinities have long been handled at full speed on this floating-point unit. However denormals are still a problem.
On Core 2 processors the worst case I have measured is a 175 times slowdown, on SSE addition and multiplication.
On SandyBridge Intel has fixed this for addition – I was unable to produce any slowdown on ‘addps’ instructions. However SSE multiplication (‘mulps’) on Sandybridge has about a 140 cycle penalty if one of the inputs or results is a denormal.
Denormal slowdown – is it a real problem?
For some workloads – especially those with poorly chosen ranges – the performance cost of denormals can be a huge problem. But how do you know? By temporarily turning off denormal support in the SSE and SSE2 FPUs with _controlfp_s:
#include <float.h>

// Flush denormals to zero, both operands and results
_controlfp_s( NULL, _DN_FLUSH, _MCW_DN );

…

// Put denormal handling back to normal.
_controlfp_s( NULL, _DN_SAVE, _MCW_DN );
This code does not affect the x87 FPU which has no flag for suppressing denormals. Note that 32-bit x86 code on Windows always uses the x87 FPU for some math, especially with VC++ 2010 and earlier. Therefore, running this test on a 64-bit process may provide more useful results.
If your performance increases noticeably when denormals are flushed to zero then you are inadvertently creating or consuming denormals to an unhealthy degree.
If you want to find out exactly where you are generating denormals you could try enabling the underflow exception, which triggers whenever one is produced. To do this in a useful way you would need to record a call stack and then continue the calculation, in order to gather statistics about where the majority of the denormals are produced. Alternately you could monitor the underflow bit to find out which functions set it. See Exceptional Floating Point for some thoughts on this, or read this paper.
Don’t disable denormals
Once you prove that denormals are a performance problem you might be tempted to leave denormals disabled – after all, it’s faster. But if disabling them gives you a speedup, that means that you are using denormals a lot, which means that if you disable them you are going to change your results – your math is going to get a lot less accurate. So, while disabling denormals is tempting, you might want to consider investigating to find out why so many of your numbers are so close to zero. Even with denormals in play the accuracy near zero is poor, and you’d be better off staying farther away from zero. You should fix the root cause rather than just addressing the symptoms.
In your other articles you mention denormals a couple of times. This article explaining what they are all about in plain English is really nice.
Update: some very smart physics friends tell me that denormals frequently occur in physics due to the use of equations which iterate until values go to zero. These denormals hurt performance without really adding value. So, physics programmers often find it valuable to disable denormals in order to regain lost performance.
That or add code to terminate such iterations at a more reasonable value? We face these kinds of problems all the time in glibc’s libm.
Signal processing, especially audio processing, is another area where recursive iterations usually lead to denormals. Disabling them is generally the best practical solution too, because it would be extremely expensive to check the signal and disable the feedback loops at every computation stage. Moreover, it’s usually a null signal that triggers the denormal slowness (think of this simple iteration: y[n] = x[n-1] * 0.1 + y[n-1] * 0.9), so curing the processing at one stage could worsen it at a subsequent stage!
But if you can’t disable denormals, you can still inject noise at some points to ensure that the signal stays away from the denormal range…
Besides physics, audio synthesis and effects are very prone to denormal issues. For instance, the common low-pass filter algorithms that are crucial to the distinctive sound of analog-style synthesizers will quickly produce performance-bruising denormals if fed a signal followed by silence. Fortunately, for audio applications, the precision afforded by denormals isn’t necessary (FLT_MIN represents signals down around -140dB), so avoiding denormals by enabling flush-to-zero or by injecting arbitrary signal at very low amplitude works fine.
“if (a != b) return 1.0f / (a - b);”

That’s called checking the wrong condition. You should’ve checked “if (a - b != 0.0f)”. Of course, it’s not obviously wrong so it will happen.
But the point is that given IEEE floats (with denormals supported) the two are equivalent. Checking whether a and b are equal is not wrong because if denormals are supported then two finite floats that are not equal will always have a non-zero difference. | https://randomascii.wordpress.com/2012/05/20/thats-not-normalthe-performance-of-odd-floats/ | CC-MAIN-2015-32 | refinedweb | 1,913 | 61.67 |
Hello! I have been working on a game where the objective will eventually be to avoid randomly generated side-scrolling blocks with a user-controlled piece, as of now only controlled by buttons. I have a few questions though.
I have looked in many tutorials and places online, but I can't seem to get key events to work. I want to have the option of controlling the player with a key event or the buttons that already exist. If someone could just give me an example of how I could implement this in my code it would be great.
My second question is about how to create animation in a JPanel, specifically side-scrolling boxes. Where should I put the code? I think I understand the logic that I would use, I'm just unsure of the specifics. A friend said that a timer object is an easy way to do this but I've never used one. Would you agree?
If someone could help me out with any of this I would be really grateful.
Here is the panel code:
Code:
package BlockGame;

import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
import java.util.Random;

public class BlockGamePanel extends JPanel implements ActionListener, KeyListener
{
    Random generator = new Random();

    final int INITIALX = 300;
    final int INITIALY = 150;

    JButton right, left, up, down;
    int x = 50, y = 50;

    public BlockGamePanel(int width, int height)
    {
        left = new JButton("left");
        add(left);
        left.addActionListener(this);

        right = new JButton("right");
        add(right);
        right.addActionListener(this);

        up = new JButton("up");
        add(up);
        up.addActionListener(this);

        down = new JButton("down");
        add(down);
        down.addActionListener(this);
    }

    public void paintComponent(Graphics g)
    {
        super.paintComponent(g);
        g.setColor(Color.red);
        g.fillRect(x, y, 15, 15);
        repaint();
    }

    // move the player piece around
    public void actionPerformed(ActionEvent event)
    {
        if (event.getSource() == right)
            x += 15;
        if (event.getSource() == left)
            x -= 15;
        if (event.getSource() == up)
            y -= 15;
        if (event.getSource() == down)
            y += 15;
        repaint();
    }

    public void keyTyped(KeyEvent e) { }

    public void keyPressed(KeyEvent e) { }

    public void keyReleased(KeyEvent e) { }
}
And, just in case, this is the run class.
Code:
package BlockGame;

import java.awt.*;
import javax.swing.*;

public class BlockGame
{
    public static void main(String[] args)
    {
        JFrame frame = new JFrame("BlockGame");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setPreferredSize(new Dimension(400, 400));
        frame.getContentPane().add(new BlockGamePanel(400, 400));
        frame.pack();
        frame.setVisible(true);
    }
}
Problem with navigation move_base [closed]
Hello, I have a problem with navigation in rviz. I have a P3dx model; I want to set a goal on the map and have my model move to that point. I am a newbie in ROS, and the tutorials don't help me :/
I use Ubuntu 18.04 and ROS Melodic
I will describe what I have done.
- I made a map with gmapping. I have my map as .yaml and .png.
- Run the map -
rosrun map_server map_server src/PioneerModel/map.yaml
- I open rviz and load the map on topic
/map
- Open node move_base and amcl
rosrun move_base move_base / rosrun amcl amcl
After open amcl I got a warning:
[ WARN] [1557425810.906842985, 3433.961000000]: No laser scan received (and thus no pose updates have been published) for 3433.961000 seconds. Verify that data is being published on the /scan topic.
In rviz I can't load laserscan - error:
Transform [sender=unknown_publisher] For frame [lms100]: No transform to fixed frame [map]. TF error: [Lookup would require extrapolation into the future. Requested time 4097.345000000 but the latest data is at time 2376.773000000, when looking up transform from frame [lms100] to frame [map]]
I tried to open my navigation node:
#include <ros/ros.h>
#include <move_base_msgs/MoveBaseAction.h>
#include <actionlib/client/simple_action_client.h>

typedef actionlib::SimpleActionClient<move_base_msgs::MoveBaseAction> MoveBaseClient;

int main(int argc, char **argv)
{
    ros::init(argc, argv, "navigation_goals");

    MoveBaseClient ac("move_base", true);
    while (!ac.waitForServer(ros::Duration(5.0)))
    {
        ROS_INFO("Waiting for the move_base action server");
    }

    move_base_msgs::MoveBaseGoal goal;
    goal.target_pose.header.frame_id = "map";
    goal.target_pose.header.stamp = ros::Time::now();

    goal.target_pose.pose.position.x = 9.8;
    goal.target_pose.pose.position.y = -16.8;
    goal.target_pose.pose.orientation.w = 1.0;

    ROS_INFO("Sending goal: base");
    ac.sendGoal(goal);
    ac.waitForResult();

    if (ac.getState() == actionlib::SimpleClientGoalState::SUCCEEDED)
        ROS_INFO("You have arrived to the charging base");
    else
    {
        ROS_INFO("The base failed for some reason");
    }
    return 0;
}
But I only have Waiting for the move_base action server
I know, that better is use launch file, but now, I want to move my model, next I will be "cleaning".
I Think that error is from load map.yaml on topic /map, but my knowledge about ROS is low and I can't handle with it :/
Thanks in advance!
#EDIT
I had wrong topic send to amcl, so now, amcl node is correct and I have answer:
[ INFO] [1557695476.070436657]: Requesting the map... [ INFO] [1557695476.079127924]: Received a 312 X 536 map @ 0.050 m/pix [ INFO] [1557695476.099264435]: Initializing likelihood field model; this can take some time on large maps... [ INFO] [1557695476.181160845]: Done initializing likelihood field model.
topic tf:
Type: tf2_msgs/TFMessage Publishers: * /gazebo () * /robot_state_publisher () * /amcl () * /slam_gmapping () Subscribers: * /rviz_1557694756783817626 () * /amcl () * /slam_gmapping () * /move_base_node () * /rviz_1557695649952641928 ()
scan:
(more)(more)
Type: sensor_msgs/LaserScan Publishers: * /gazebo () Subscribers: * /rostopic_11205_1557695085155 () * /amcl () * /slam_gmapping () * /rviz_1557695649952641928 ...
movebase action server will not start until movebase has received a laser scan (among other things). So you need to address the laser scan problem first.
Check those topics and make sure they all have data being sent to them
Sory for late answear, but I don't have access to my project in the weekend. I edited my post, I understand that amcl node was looking for in wrong topic and I correct this.
Tyrel - I checked this topics, and they are subscribers.
Now, I have in rviz my static map( but have wrong orientaion), I can see laser scan, but when I run my navigation node, I still have Waiting for the move_base action server
anyone know how the quarterion/orientation in send goal navigation code works? | https://answers.ros.org/question/322857/problem-with-navigation-move_base/ | CC-MAIN-2021-04 | refinedweb | 597 | 58.28 |
Reverse Engineering a Uniden Cordlessphone LCD
I recently upgraded my home phone system and thus was left with a couple of old Uniden DCT648-2 handsets. Most of the components inside are probably not salvageable but these handsets use 3×16 character LCDs, so it would be nice if I could reuse them in my other projects.
The LCD inside the DCT648-2 handset has 8 pins. This suggests that it is using some kind of serial protocol. Unfortunately, I could not find any information on the LCD used here so I decided to hook it up to a logic analyzer and see if I could reverse engineering the protocol used to drive the LCD.
A few of the pins can be easily identified. Pin 2 (from left to right) is clearly connected to the circuit ground. And pin 3 measures 3.3V when the phone is powered on which suggests that the LCD uses 3.3V logic. Pin 1 is roughly -2.6 with respect to the ground. If you look at the PCB carefully, you will see a capacitor is placed between pin 1 and ground. Since there is no visible trace coming out from pin 1, I think it is used solely for connecting an external capacitor for the builtin charge pump to provide the negative voltage needed by the LCD.
With this information, I hooked up pin 2 through pin 8 to my logic analyzer. After looking at the captured data, it became clear that this LCD is using some kind of slightly modified SPI protocol. We can also easily identify that pin 6 is the SPI clock, pin 7 is the MOSI and pin 8 is the CS pin. From the captured data we can also see that the clock remains high when inactive (CPOL=1).
Pin 5 doesn’t look to conform to any standard SPI signal, but as you will see shortly it is presumably used to load command to initialize the LCD. Now the only question remaining is whether the data is loaded on the leading edge or the trailing edge of the clock.
After comparing the data captured using data valid on rising edge settings (CPHA=0) and data valid on trailing edge settings (CPHA=1) it became clear that CPHA=1 is used as when setting CPHA to 0 the decoded output sometimes become garbled, which suggests that the signal is not yet settled on the rising edge.
So now we know the SPI mode the LCD is communicating with: CPOL=1 and CPHA=1. This corresponds to SPI_MODE3 in Arduino. Since the character LCD is 3×16 (48 bytes) but each frame contains 87 bytes and the actual character displayed on the LCD starts from the 40th byte so the first 39 bytes must be some kind of initialization data that prepares the LCD for data display. The first 39 bytes also do not seem to change when the display changes. The 39th byte corresponds to the last low-to-high transition of pin 2 so clearly pin 2 is used to signify the beginning of the display data.
To test whether the above assumptions are correct, I used a 3.3V powered Arduino prototyping board I built a while back and wrote a simple program (see below). The code basically just output ASCII characters from 65 (“A”) and upwards until it fills all 48 spaces on the screen.
Instead of mounting the LCD onto a protoboard, I simply cut the signal traces and used an ATMega328p running Arduino to talk to the display.
And here is a screenshot of the output.
The corresponding code I used is listed below, the 39 bytes header info is somewhat mysterious but it seems to be working just fine by sending these verbatim each time with the correct timing and command pin toggle.
#include <SPI.h> const int pinMOSI = 11; //MOSI const int pinMISO = 12; //MISO const int pinSPIClock = 13; //SCK const int pinCS = 10; //CS const int pinCmd = 9; //CMD char header[] = { 0x31, 0x31, 0x31, 0x06, 0xc0, 0x0e, 0x11, 0x13, 0x17, 0x1f, 0x1f, 0x1f, 0x00, 0x04, 0x0e, 0x0e, 0x0e, 0x0e, 0x1f, 0x04, 0x00, 0x00, 0x02, 0x15, 0x05, 0x15, 0x02, 0x00, 0x00, 0x00, 0x1b, 0x12, 0x1b, 0x12, 0x12, 0x00, 0x00, 0x0c, 0x02}; void setup() { Serial.begin(9600); pinMode(pinMOSI, OUTPUT); pinMode(pinMISO, INPUT); pinMode(pinSPIClock, OUTPUT); pinMode(pinCS, OUTPUT); pinMode(pinCmd, OUTPUT); digitalWrite(pinCS, HIGH); digitalWrite(pinCmd, HIGH); SPI.begin(); SPI.setDataMode(SPI_MODE3); } void writeCmd() { digitalWrite(pinCmd, LOW); digitalWrite(pinCS, LOW); SPI.transfer(header[0]); digitalWrite(pinCS, HIGH); delayMicroseconds(180); digitalWrite(pinCS, LOW); SPI.transfer(header[1]); delayMicroseconds(30); digitalWrite(pinCS, HIGH); digitalWrite(pinCS, LOW); SPI.transfer(header[2]); digitalWrite(pinCS, HIGH); delayMicroseconds(30); digitalWrite(pinCS, LOW); SPI.transfer(header[3]); digitalWrite(pinCS, HIGH); delayMicroseconds(60); digitalWrite(pinCS, LOW); SPI.transfer(header[4]); digitalWrite(pinCS, HIGH); delayMicroseconds(30); digitalWrite(pinCmd, HIGH); for (int i = 5; i < 37; i++) { digitalWrite(pinCS, LOW); SPI.transfer(header[i]); digitalWrite(pinCS, HIGH); delayMicroseconds(30); } digitalWrite(pinCmd, LOW); digitalWrite(pinCS, LOW); SPI.transfer(header[37]); digitalWrite(pinCS, HIGH); delayMicroseconds(30); digitalWrite(pinCS, LOW); SPI.transfer(header[38]); digitalWrite(pinCS, HIGH); digitalWrite(pinCmd, HIGH); delayMicroseconds(30); } void writeData() { for (int i=0; i<48; i++) { digitalWrite(pinCS, LOW); SPI.transfer(i+65); digitalWrite(pinCS, HIGH); delayMicroseconds(30); } } void loop() { writeCmd(); writeData(); delay(20); }
Finally, here is a video showing the process of this reverse engineering, you can find more information of analyzing the protocol used by this LCD in the video.
[…] […]
Great article; thanks! I also had some Uniden handsets laying around…I’ve torn one down and gotten similar results, but not exact…my 39-byte header is different (but consistent) and the timing between bytes (and the preamble timing) is much different (slower) than what you have spelled out in your Arduino code example…did you ramp the timing up (reducing delays) to see what the best performance is? Or is that what you measured during logic analysis?
I haven’t cut the traces and tried controlling the LCD directly yet…hopefully today!
Thanks again!
The timing is almost exactly what the original circuit used. That said, I did experiment with different timing and the LCD worked with a pretty wide range of clock speeds.
I wish you had included a list of all the pins on the LCD and what you found them to be for. I figured out some of them from comments in the code but still can’t find them all.
Hi Steve C, I just replied to your comment on my channel. Here is the pinout:
“Except for the ground, only the four pins on the right are used. These are CMD (pin 9) SCK (pin 13) MOSI (pin 11) and CS (pin 10). MISO is not used. Hope it helps. MOSI is not explicitly specified in the code as it is only used in the SPI library.”
Did you also have to connect 3.3 volts to pin 3 and tie pin 4 high for Enable? What about connecting a capacitor to pin 1?
Correct, you will need to pull up pin 3 and pin 4 to Vcc and between pin 1 and pin 2 you should put a 1uF ceramic capacitor for the charge pump. I didn’t have to make those connections as the LCD was still connected to the original board.
Could you list the exact pinouts that you used for all the pins? i.e. lcd 1 -> Arduino “a”, lcd 2 -> Arduino “b”, etc. and if the power pins (5v, GND) are used while Arduino is connected.
I am trying to get an old one working, and so far have only been able to either fill the screen with black rectangles or nothing at all. I unfortunately don’t have a logic analyzer so I don’t know if I need a different 39 byte header, but am hoping that you could help me eliminate the option of wiring problems. Thanks!
I had removed the screen from the phone, and soldered pins onto it, but when I plug it in and upload the code, nothing shows up. Would that be because I might have pins mixed up or because pins 1 and 2 both go to ground?
If you look at the board, there is a cap between pin 1 and pin 2 which is used for the charge pump to provide the negative voltage needed by the LCD. So you can’t short pin 1 and 2 together.
Thanks! I will change the wiring, so would I have to put pin 1 to a different place? Or do I not connect it to anything?
If you take a look at this picture you will see how pin 1 and 2 of the LCD are connected.
Thanks, but does that mean I would have to find a capacitor that is the same as that one, or do I need to just leave pin 1 untouched?
You will need to find a similar valued capacitor as it is part of the charge pump circuit, without it the LCD would not have the correct negative bias and would not work.
Ok, thanks! I will try that, I might try using the capacitor from the phone even. | http://www.kerrywong.com/2017/06/04/reverse-engineering-a-uniden-cordlessphone-lcd/comment-page-1/ | CC-MAIN-2018-47 | refinedweb | 1,538 | 69.52 |
This document is also available in these non-normative formats: XML
Copyright © 2011 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply.
This note describes two new XProc steps designed to make it easier to construct documents within an XProc pipeline using values computed by that Group Note. This document is a product of the XML Processing Model Working Group as part of the W3C XML Activity. The English version of this specification is the only normative version. However, for translations of this document, see.
This Note defines some additional optional steps for use in XProc pipelines. The XML Processing Model Working Group expects that these new steps will be widely implemented and used.
Please report errors in this document to the public mailing list public-xml-processing-model-comments's quite common in [XProc: An XML Pipeline Language] to construct documents using values computed by the pipeline. This is particularly (but not exclusively) the case when the pipeline uses the p:http-request step. The input to p:http-request is a c:request document; attributes on the c:request element control most of the request parameters; the body of the document forms the body of request.
A typical example looks like this:
<c:request <c:body> <computed-content/> </c:body> </c:request>
If we assume that the href value and the computed content come from an input document, and the username and password are options, then a typical pipeline to compute the request becomes quite complex.
<p:pipeline xmlns: <p:option <p:option <p:identity> <p:input <p:inline> <c:request </p:inline> </p:input> </p:identity> <p:add-attribute <p:with-option <p:pipe </p:with-option> </p:add-attribute> <p:add-attribute <p:with-option </p:add-attribute> <p:add-attribute <p:with-option </p:add-attribute> <p:insert <p:input <p:pipe </p:input> </p:insert> <p:unwrap </p:pipeline>
There's nothing wrong with this pipeline, but it requires several steps to accomplish with the pipeline author probably considers a single operation. What's more, the result of these steps is not immediately obvious on casual inspection.
In order to make this simple construction case both literally and conceptually simpler, this note introduces two new XProc steps in the XProc namespace. Support for these steps is optional, but we strongly encourage implementors to provide them.
The new steps are p:in-scope-names and p:template. Taken together, they greatly simplify the pipeline:
<p:pipeline xmlns: <p:option <p:option <p:in-scope-names <p:template> <p:input <p:inline> <c:request { /doc/request/node() } </c:request> </p:inline> </p:input> <p:input <p:pipe </p:input> <p:input <p:pipe </p:input> </p:template> </p:pipeline>
The p:in-scope-names step provides all of the in-scope options and variables in a c:param-set (this operation is exactly analagous to what the p:parameters step does, except that it operates on the options and variables instead of on parameters).
The p:template step searches for XPath expressions, delimited by curly braces, in a template document and replaces each with the result of evaluating the expression. All of the parameters passed to the p:template step are available as in-scope variable names when evaluating each XPath expression.
Where the expressions occur in attribute values, their string value is used. Where they appear in text content, their node values are used.
In this note the words must, must not, should, should not, may and recommended are to be interpreted as described in [RFC 2119].
The p:in-scope-names step exposes all of the in-scope variables and options as a set of parameters in a c:param-set document.
<p:declare-step
type
="
p:in-scope-names
"
>
<p:output
port
="
result
"
primary
="
false
"
/>
</p:declare-step>
Each in-scope variable and option is converted into a c:param element. The resulting c:param elements are wrapped in a c:param-set and the parameter set document is written to the result port. The order in which c:param elements occur in the c:param-set is implementation-dependent.
For consistency and user convenience, if any of the variables or options have names that are in a namespace, the namespace attribute on the c:param element must be used. Each name must be an NCName.
The base URI of the output document is the URI of the pipeline document that contains the step.
For consistency with the p:parameters step, the result port is not primary.
This unlikely pipeline demonstrates the behavior of p:in-scope-names:
<p:declare-step xmlns: <p:output <p:pipe </p:output> <p:option <p:option <p:variable <p:in-scope-names </p:declare-step>
Assuming the values supplied for the username and password options are “user” and “pass”, respectively, the output would be:
<c:param-set xmlns: <c:param <c:param <c:param </c:param-set>
The p:template replaces each XPath expression, delimited with curly braces, in the template document with the result of evaluating that expression.
<p:declare-step
type
="
p:template
"
>
<p:input
port
="
template
"
/>
<p:input
port
="
source
"
sequence
="
true
"
primary
="
true
"
/>
<p:input
port
="
parameters
"
kind
="
parameter
"
/>
<p:output
port
="
result
"
/>
</p:declare-step>
While evaluating each expression, the names of any parameters passed to the step are available as variable values in the XPath dynamic context.
The step searches for XPath expressions in attribute values, text content (adjacent text nodes, if they occur in the data model, must be coalesced; this step always processes maximal length text nodes), processing instruction data, and comments. XPath expressions are identified by curly braces, similar to attribute value templates in XSLT or enclosed expressions in XQuery.
In order to allow curly braces to appear literally in content, they can be escaped by doubling them. In other words, where “{” would start an XPath expression, “{{” is simply a single, literal opening curly brace. The same applies for closing curly braces.
Inside an XPath expression, strings quoted by single (') or double (") quotes are treated literally. Outside of quoted text, it is an error for an opening curly brace to occur. A closing curly brace ends the XPath expression (whether or not it is followed immediately by another closing curly brace).
These parsing rules can be described by the following algorithm, though implementations are by no means required to implement the parsing in exactly this way, provided that they achieve the same results.
The parser begins in regular-mode at the start of each unit of content where expansion may occur. In regular-mode:
“{{” is replaced by a single “{”.
“}}” is replaced by a single “}”.
Note: It is
a dynamic error (
err:XC0067) to
encounter a single closing curly brace “}”
that is not immediately followed by another closing curly
brace.
A single opening curly brace “{” (not immediately followed by another opening curly brace) is discarded and the parser moves into xpath-mode. The inital expression is empty.
All other characters are copied without change.
In xpath-mode:
It is a
dynamic error (
err:XC0067) to
encounter an opening curly brace “{”.
A closing curly brace “}” is discarded and ends the expression. The expression is evaluated and the result of that evaluation is copied to the output. The parser returns to regular-mode.
Note: Braces cannot be escaped by doubling them in xpath-mode.
A single quote (') is added to the current expression and the parser moves to single-quote-mode.
A double quote (") is added to the current expression and the parser moves to double-quote-mode.
All other characters are appended to the current expression.
In single-quote-mode:
A single quote (') is added to the current expression and the parser moves to xpath-mode.
All other characters are appended to the current expression.
In double-quote-mode:
A double quote (") is added to the current expression and the parser moves to xpath-mode.
All other characters are appended to the current expression.
It is a
dynamic error (
err:XC0067) if the
parser reaches the end of the unit of content and it is not in
regular-mode.
The context node used for each expression is the document passed
on the source port. It is a dynamic error (
err:XC0068) if more
than one document appears on the source port.
In an XPath 1.0 implementation, if p:empty is given or implied on the source port, an empty document node is used as the
context node. In an XPath 2.0 implementation, the context item is
undefined. It
is a dynamic error (
err:XC0026) if any
XPath expression makes reference to the context node, size, or
position when the context item is undefined.
In an attribute value, processing instruction, or comment, the string value of the XPath expression is used. In text content, an expression that selects nodes will cause those nodes to be copied into the template document.
Depending on which version of XPath an implementation supports, and possibly on the xpath-version setting on the p:template, some implementations may report errors, or different results, than other implementations in those cases where the interpretation of an XPath expression differs between the versions of XPath.
An example of p:document appears in Section 1, “Introduction”.
[RFC 2119] Key words for use in RFCs to Indicate Requirement Levels. S. Bradner. Network Working Group, IETF, Mar 1997.
[XProc: An XML Pipeline Language] XML: An XML Pipeline Language. Norman Walsh, Alex Milowski, and Henry S. Thompson, editors. W3C Recommedation 11 May 2010.
The following dynamic errors are explicitly called out in this note.
err:XC0026
It is a dynamic error if any XPath expression makes reference to the context node, size, or position when the context item is undefined.
See: p:template
err:XC0067
It is a dynamic error to encounter a single closing curly brace “}” that is not immediately followed by another closing curly brace.
See: p:template, p:template, p:template
err:XC0068
It is a dynamic error if more than one document appears on the source port.
See: p:template
Other errors may also arise, see [XProc: An XML Pipeline Language] for a complete discussion of error codes. | http://www.w3.org/TR/xproc-template/ | CC-MAIN-2017-34 | refinedweb | 1,697 | 53 |
Adding.
/*
Arduino Based Music Player
This example shows how to play four songs from the SD card by pressing a push button
The circuit:
* Push Button on pin 2 and 3
* Audio Out - pin 9
* SD card attached to SPI bus as follows:
** MOSI - pin 11
** MISO - pin 12
** CLK - pin 13
** CS - pin 4
created 25 Jun 2017
by Aswinth Raj
This example code was created for CircuitDigest.com
*/
#include "SD.h" //Lib to read SD card
#include "TMRpcm.h" //Lib to play auido
#include "SPI.h" //SPI lib for SD card
#define SD_ChipSelectPin 4 //Chip select is pin number 4
TMRpcm music; //Lib object is named "music"
int song_number=0;
boolean debounce1=true;
boolean debounce2=true;
boolean play_pause;
void setup(){
music.speakerPin = 9; //Audio out on pin 9
Serial.begin(9600); //Serial Com for debugging
if (!SD.begin(SD_ChipSelectPin)) {
Serial.println("SD fail");
return;
}
pinMode(2, INPUT_PULLUP); //Button 1 with internal pull up to change track
pinMode(3, INPUT_PULLUP); //Button 2 with internal pull up to play/pause
music.setVolume(5); // 0 to 7. Set volume level
music.quality(1); // Set 1 for 2x oversampling Set 0 for normal
//music.volume(0); // 1(up) or 0(down) to control volume
//music.play("filename",30); plays a file starting at 30 seconds into the track
}
void loop()
{
if (digitalRead(2)==LOW && debounce1 == true) //Button 1 Pressed
{
song_number++;
if (song_number==5)
{song_number=1;}
debounce1=false;
Serial.println("KEY PRESSED");
Serial.print("song_number=");
Serial.println(song_number);
if (song_number ==1)
{music.play("1.wav",10);} //Play song 1 from 10th second
if (song_number ==2)
{music.play("2.wav",33);} //Play song 2 from 33rd second
if (song_number ==3)
{music.play("3.wav");} //Play song 3 from start
if (song_number ==4)
{music.play("4.wav",25);} //Play song 4 from 25th second
}
if (digitalRead(3)==LOW && debounce2 == true) //Button 2 Pressed
{
music.pause(); Serial.println("PLAY / PAUSE");
debounce2=false;
}
if (digitalRead(2)==HIGH) //Avoid debounce
debounce1=true;
if (digitalRead(3)==HIGH) //Avoid debounce
debounce2=true;
}
Aug 22, 2017
Hi,
Just curious what speaker you are using for this project? It sounds very nice.
Pat
Aug 23, 2017
Hi Pat,
This is a normal 8 ohm speaker. There is nothing special about the speaker; I have tried it with other speakers and was able to achieve the same quality.
Thanks
Aug 30, 2017
The part list says you need a 10uF and 100uF capacitor but the diagram shows 1uF and 10uF in the circuit....which is the correct combination?
Sep 01, 2017
Hi Pat,
I have tested the circuit will both the combinations and they worked fine. You can use either one and the performance will be the same.
Sep 04, 2017
So I use the 100uF instead of the 10uF and the 10uF instead of the 1uF?
Sep 04, 2017
yes
Sep 02, 2017
Hey, I see in your Fritzing pic you put the ground in the + line and the + in the ground line, but the speaker's black wire is connected to the breadboard's blue line (where 5V is connected) and the red wire to the red line (where GND is connected). Where should I connect the amplifier pins?
Sep 03, 2017
Yes, the Fritzing pic has a small representation problem, but the connections are correct and will work as expected.
I have mistakenly swapped the positive and ground rails (representation) of the breadboard with the actual positive and ground rails of the circuit. This will not be a problem since the logic of the connection remains the same.

"but the speaker's black wire is connected to the breadboard's blue line (where 5V is connected) and the red wire to the red line (where GND is connected)."

No, you have completely misunderstood the circuit. The upper positive and ground rails are only powered by +5V and ground. The lower positive and ground rails have no other connection other than the speaker itself.
You can know more about audio amplifier circuit from here.
Sep 11, 2017
Hi, I had one other comment but it has not posted yet, so please disregard it. I'm getting the following error; how should I resolve it? /Users/milesrichie/Documents/Arduino/sketch_sep10a/sketch_sep10a.ino: In function 'void loop()':
/Users/milesrichie/Documents/Arduino/sketch_sep10a/sketch_sep10a.ino:54:25: warning: deprecated conversion from string constant to 'char*' [-Wwrite-strings]
{music.play("1.wav",10);} //Play song 1 from 10th second
^
/Users/milesrichie/Documents/Arduino/sketch_sep10a/sketch_sep10a.ino:56:25: warning: deprecated conversion from string constant to 'char*' [-Wwrite-strings]
{music.play("2.wav",33);} //Play song 2 from 33rd second
^
/Users/milesrichie/Documents/Arduino/sketch_sep10a/sketch_sep10a.ino:58:22: warning: deprecated conversion from string constant to 'char*' [-Wwrite-strings]
{music.play("3.wav");} //Play song 3 from start
^
/Users/milesrichie/Documents/Arduino/sketch_sep10a/sketch_sep10a.ino:60:25: warning: deprecated conversion from string constant to 'char*' [-Wwrite-strings]
{music.play("4.wav",25);} //Play song 4 from 25th second }
^
Sketch uses 12676 bytes (39%) of program storage space. Maximum is 32256 bytes.
Global variables use 1118 bytes (54%) of dynamic memory, leaving 930 bytes for local variables. Maximum is 2048 bytes.
avrdude: ser_open(): can't open device "COM1": No such file or directory
Problem uploading to board. See for suggestions.
Sep 13, 2017
Hi Miles, The above problem is not because of the program but because of your PORT settings. As the error states "can't open device "COM1": No such file or directory" Meaning you have either not selected the correct COM port or have not connected Arduino to PC properly. Try uploading a Blink program to verify your settings before trying this program. Thanks!
Sep 21, 2017
I have the same problem, have you solved it?
Sep 22, 2017
Have you tried the above solution?
Sep 21, 2017
Hey! Can you please update this project for the Arduino Mega board without the buttons which you've used in this circuit? Also, if I removed the buttons, is it possible to play multiple files?
Sep 29, 2017
Will this work without the LM386 and go direct to head phones?
Sep 29, 2017
Yes Luke, LM386 circuit is only for the speaker
Oct 09, 2017
Hi,
I was able to purchase pololu microSD card breakout board with a 3.3V regulator and level shifters and I am getting error (SD Fail)
Is it related to Module vendor ?
Best regards
Noam
Oct 09, 2017
So you have a 3.3V module with a 5V level shifter, right?
Make sure the level shifter is working by manually measuring the voltage.
The error can be either because of your connections or because of your module.
Oct 11, 2017
code error
Oct 11, 2017
What error did you get rami?
Oct 13, 2017
Hi. i got this error /Users/milesrichie/Documents/Arduino/sketch_sep10a/sketch_sep10a.ino: In function 'void loop()':
/Users/fitrihanun/Documents/Arduino/sketch_sep10a/sketch_sep10a.ino:54:25: warning: deprecated conversion from string constant to 'char*' [-Wwrite-strings]
{music.play("1.wav",10);} //Play song 1 from 10th second
^
/Users/fitrihanun/Documents/Arduino/sketch_sep10a/sketch_sep10a.ino:56:25: warning: deprecated conversion from string constant to 'char*' [-Wwrite-strings]
{music.play("2.wav",33);} //Play song 2 from 33rd second
^
/Users/fitrihanun/Documents/Arduino/sketch_sep10a/sketch_sep10a.ino:58:22: warning: deprecated conversion from string constant to 'char*' [-Wwrite-strings]
{music.play("3.wav");} //Play song 3 from start
^
/Users/fitrihanun/Documents/Arduino/sketch_sep10a/sketch_sep10a.ino:60:25: warning: deprecated conversion from string constant to 'char*' [-Wwrite-strings]
{music.play("4.wav",25);} //Play song 4 from 25th second }
^
Mar 15, 2018
Do you know how to solve it?
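For anyone hitting this warning: it appears because `music.play("1.wav")` passes a string literal (a `const char[]` in C++) to a parameter the TMRpcm library declares as a plain `char*`. The sketch still compiles and runs, but one way to silence the warning is to copy the filename into a mutable `char` array first. A minimal sketch of the idea, using a stand-in `play()` function rather than the real library:

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

// Stand-in for TMRpcm::play(char*), which takes a non-const pointer.
// Passing a string literal straight in is what triggers -Wwrite-strings.
std::size_t play(char* filename) {
    return std::strlen(filename); // the real library would start playback
}

// Copying the literal into a mutable array first avoids the warning.
std::size_t playTrackOne() {
    char track[] = "1.wav"; // mutable char array, not a string literal
    return play(track);
}
```

In the sketch itself the equivalent change would be `char track1[] = "1.wav"; music.play(track1, 10);`, assuming the filename matches what is on the SD card.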
Oct 15, 2017
Hi, can I use this library with an ATmega8 MCU? I tried it but it shows an error like "exit status 1
Error compiling for board Arduino NG or older."
Oct 16, 2017
Sorry ANAND,
The libraries will not support the ATmega8; you will have to upgrade your hardware for this project.
Oct 23, 2017
Mr Raj, it was a wonderful example. I am thinking to revise the code, but this time using only a single button to turn the music on and off. Here is what I want: I'm going to use a toggle switch (on and off). When the button is closed (permanently closed), the music will play. When the button is pressed again, the button will be open and the music will stop. However, I want to play the music from the start again when the button is closed again. Can you help me with this? Many thanks
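A possible starting point for this single-switch behaviour (not from the original article, just a sketch): detect the moment the switch closes and the moment it opens, and map those edges to "play from the start" and "stop". The edge detection itself is plain C++ and testable off-hardware; wiring it up would mean calling something like `music.play("1.wav")` on a +1 event and the library's stop call on a -1 event (check your TMRpcm version for the exact name).

```cpp
// Edge-detecting toggle: returns +1 when the switch has just closed
// (start the track from the beginning), -1 when it has just opened
// (stop), 0 when nothing changed. Call update() once per loop() pass.
struct ToggleSwitch {
    bool wasClosed = false;
    int update(bool isClosed) {
        int event = 0;
        if (isClosed && !wasClosed) event = +1;  // rising edge: play
        if (!isClosed && wasClosed) event = -1;  // falling edge: stop
        wasClosed = isClosed;
        return event;
    }
};
```

Because playback always restarts on the +1 event, the track begins from the start each time the switch is closed, which matches the behaviour asked about.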
Oct 25, 2017
Hello,
Has anyone tried playing the wav clip stored in onboard flash, without the sd card?
Thanks
Oct 25, 2017
Hi Bitman,
I have not tried it yet. But yes it should work even without a SD card module
Nov 07, 2017
Hi, I am looking to build something like this, but with an option to play one of four .wav files. Is it fairly simple to instead have four buttons, each playing an audio track, and another to stop? Pressing any button would halt any other .wav file and play just the programmed .wav file.
Would appreciate any help as it is for a non-profit group that I help out
Nov 21, 2017
Sir, will you please give me a sample Arduino program to play the audio files one by one at exact pre-scheduled time intervals and switch off as each song ends?
Nov 30, 2017
Hi, I think I've wired everything correctly but no sound is coming out except an occasional tap, and my LM386 chip is getting really hot.
I'm trying to make it so that I can move the buttons away from the breadboard if possible.
Nov 30, 2017
Update: the heating problem is fixed, but no music is playing when I press the buttons, only a tap. I checked that the SD card is being read OK and the files are found, but I still can't get music to play. Any suggestions would be helpful.
Nov 30, 2017
OK, final update before my head explodes!
I'm getting a warning about the files when I compile, which reads:
warning: deprecated conversion from string constant to 'char*' [-Wwrite-strings]
{music.play("1.wav");}
(repeats for the other files)
I've looked on the forums and the responses just make me more confused, as I don't know how to apply the changes they suggest out of the context they're suggested in (if that makes sense).
Something along the lines of making the song files constants?
Could this be why my files won't play?
Dec 04, 2017
I am not sure how you ended up with this error. But was the code working for you properly the first time? I assume the amplifier was heating mostly because of a wrong connection. By the time you rectified the connection, the IC might have been completely burnt. So try replacing the IC; in case you don't have one, ignore the amplifier part. It will still work just fine, but the sound will be low. I tried compiling the same code given here, yet didn't get the error you are speaking about.
Jan 04, 2018
Hi,
How would you connect a speaker using bluetooth module HC06? I am unsure of how to do this.
Thanks :)
Jan 10, 2018
Hi. How is it possible for the Arduino to give a smooth analog signal for music since it can only produce PWM? Can you answer me please? I am confused.
Jan 11, 2018
It is because of the TMRpcm.h library. This produces the analog signal to play music. You have to dive deep into how the library works to understand how the signal is produced.
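To make the PWM-to-analog idea concrete: the pin only ever outputs 0 or 1, but it switches much faster than the audio, and the speaker responds to the short-term average of that switching. A toy one-pole low-pass filter (the numbers here are illustrative, not TMRpcm's actual timer configuration) shows a 25% duty cycle settling near an "analog" level of 0.25:

```cpp
#include <cassert>

// Feed a square PWM stream (only 0.0 or 1.0) through an RC-style
// one-pole low-pass filter; the output settles near the duty cycle,
// which is how a speaker "hears" an analog level from a digital pin.
double filteredPwmLevel(double duty, int cycles, int stepsPerCycle, double alpha) {
    double y = 0.0;
    int highSteps = static_cast<int>(duty * stepsPerCycle);
    for (int c = 0; c < cycles; ++c) {
        for (int s = 0; s < stepsPerCycle; ++s) {
            double x = (s < highSteps) ? 1.0 : 0.0; // raw pin state
            y += alpha * (x - y);                   // smoothing step
        }
    }
    return y; // approximately equal to the duty cycle once settled
}
```

Varying the duty cycle per audio sample, as TMRpcm does, therefore traces out the waveform stored in the WAV file.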
Jan 13, 2018
Hello Mr. Raj.
I was successful in doing the same. However, everytime i press the buttton to play the music i can a loud ticking sound in which i believed coming from the switch. Is there anyway i can eliminate this "click" sound.
Thanks
Jan 15, 2018
Yes you can add a delay after every time the arduino reads the switch
Jan 18, 2018
I am use the headset if u need any amplification
Jan 23, 2018
No i don't think you will need amplification for Headphones. Let me know how it turns out !
Jan 19, 2018
Default test file provided in library plays well.But another file not playes properly even i have convered as per your suggestion like
1. into .wav
2.sample :1600Hz
3.Channel :Mono
4.Bits 8
Advance pcm format :PCM unsigned 8 bit
Jan 23, 2018
Not play properly in the sense? Are you getting wired sounds? or is it just mute?
Jan 24, 2018
hello sir my name is awan first sorry to interrupt your time, what kind a speaker when you use in this project because from this tutorial I can't rise up my volume and noise everywhere when I play the music
thanks
Jan 24, 2018
You can use any speaker, you can also use your computer speakers.
Try with a simple speaker first, if you are satisfied remove the small speaker and replace it with a bigger one. The model of speaker will not affect the working of the circuit. All the best
Jan 27, 2018
SD fail
KEY PRESSED
song_number=1
PLAY / PAUSE
(in serial monitor)
the memory card is ok and formatted also.
I am getting this errors
please reply for me as soon as possible
Jan 30, 2018
These are not errors, this means that your buttons are not wired properly check your connections
Feb 01, 2018
hey thanks, but i have a problem ....... when i push the push button first time , track 1 play but when i push it the second ,track 2 didn't play please tell me why and thank you
Feb 05, 2018
Sounds odd, it should have played. Make sure the button is working properly
Jul 08, 2018
The code has a "}" in the wrong place. Move the second to last "}" above the line "if (digitalRead(3)==LOW && debounce2 == true) //Button 2 Pressed"
Feb 06, 2018
Can you specify where the error is in the fritzing diagram? I am trying to build this circuit based off the fritzing, but earlier comments make is seem like the fritzing is inaccurate.
Feb 07, 2018
You can build it, if you have any problem use the forum. The circuit diagram is good
Feb 12, 2018
i have a problem when i am trying to change track it will not work when i press again the button
Feb 14, 2018
It plays only noise .No playback.what can be the error??
Feb 15, 2018
Hi,
How would you connect a speaker using bluetooth module HC06? I am unsure of how to do this.
Thanks :)
Feb 18, 2018
Hi, thanks for posting this project. I have a question. What's the purpose of putting a 10k resistor in series with a 10uF capacitor to ground at the output? Usually a low resistance (10ohm) and low capacitance (0.05uF) are used in that manner to filter out high frequencies. I think that way you filter out also desired frequencies. Let me know please. Thank you.
Feb 20, 2018
Can you please CLEARLY specify where the errors are in the breadboard diagram? I am having trouble building the circuit and would like to use the breadboard diagram instead of the schematic. Are there any errors in the schematic, or can I build safely off of that?
Feb 20, 2018
Hi,
You have done a very good job. I am also doing a project like this. I want to add one more button for stopping the music. I tried with the "audio.stopPlayback(); " function . But is not working properly. Do have any idea how to fix this?
Say, in your current project if you add one more button then what will be the code for stopping the music?
Mar 01, 2018
Hi. We are currently working on a project to convert ASL into normal speech using Arduino mega 2560. The code and the connections are exactly as given on various sites (CS pin 53 on mega), but at each time it is failing to initialise. Please help us if possible! Thank You
Mar 01, 2018
Which code and circuit are you using? I used the one given here and dint find any problem? What error are you facing? What do you mean by failing to initialise?
Mar 02, 2018
I connected everything and put code on arduino but it doesnt play the music. You can just hear some noise, buttons do nothing, Whats the problem?
Mar 03, 2018
hlo, I have tried all the things u have mentioned in ur list. But v r unable to get the audio from the speaker. We hve used a 3ohm speaker. We are getting mgs on monitor screen as"KEY PRESSED
song_number=1". And no any sound is heard. Can u plz help us out wdt this issue.
Mar 05, 2018
If you are getting this message it means the code is working properly. The problem is most likely on your hardware share a pic of your set-up that might help to figure out the problem
Mar 09, 2018
We have replaced the amplifier circuit with the lm386 module directly. Can you please share the pin connection diagram. Because yet we are getting the same message.
Mar 12, 2018
is it possible to use a lm358 istead of lm386
Mar 13, 2018
Yes you can also use a lm358 in place of lm386. The circuit will vary a bit, google for LM358 amplifier circuits and you will get one.
Also you can make this project even without the amplifier part
Mar 15, 2018
How much Arduino microcontroller memory does this project use?
Mar 16, 2018
Just complie the program on the Arduino IDE and at the end of compilation read the logs and you will find the information your looking for
Mar 28, 2018
C:\Users\Acer\Documents\Arduino\sketch_mar21a\sketch_mar21a.ino: In function 'void loop()':
C:\Users\Acer\Documents\Arduino\sketch_mar21a\sketch_mar21a.ino:66:25: warning: deprecated conversion from string constant to 'char*' [-Wwrite-strings]
{music.play("1.wav",10);} //Play song 1 from 10th second
^
C:\Users\Acer\Documents\Arduino\sketch_mar21a\sketch_mar21a.ino:69:25: warning: deprecated conversion from string constant to 'char*' [-Wwrite-strings]
{music.play("2.wav",33);} //Play song 2 from 33rd second
^
C:\Users\Acer\Documents\Arduino\sketch_mar21a\sketch_mar21a.ino:72:22: warning: deprecated conversion from string constant to 'char*' [-Wwrite-strings]
{music.play("3.wav");} //Play song 3 from start
^
C:\Users\Acer\Documents\Arduino\sketch_mar21a\sketch_mar21a.ino:75:25: warning: deprecated conversion from string constant to 'char*' [-Wwrite-strings]
{music.play("4.wav",25);} //Play song 4 from 25th second
^
Mar 31, 2018
Mr Raj, Thank you very much for this project. I found this is very useful and cost saving (no buying mp3 shield), I will build mini music player for my kids.
thank you :)
Apr 06, 2018
Hey, there is something not clear in the circuit. Are pins 2 and 4 of the amplifier connected to the positive pin of the speaker? Because in the arduino picture it's not showing
Apr 12, 2018
Hey, thanks a lot for the project! However, I am having trouble when the music starts playing, sometimes songs just start fast forwarding alone and skipping parts. Also the the code just stops sometimes when the skipping gets really bad. Is this a problem with the wires? or the sampling of my wav files?
Apr 25, 2018
Hi! Can I use a PAM8403 to amplify the sound?
Apr 25, 2018
hello, i just want to us if it is possible to control the speed of the stepper motor using the music mp3 plays? and how if it is possible, i need that on our project, i just want to know if it is possible because our professor want our project music box to control the rotation of the balerina through the beat of music, for example the music is upbeat therefore the balerina rotates fast and if the music is slow, the balerina rotates slowly. Thank you.
Apr 27, 2018
Does the capacitor have to be 25 V or any other voltage
Apr 27, 2018
Hello,
I want to attach HC05 Bluetooth to control song & servo with mobile. So what is code??
May 13, 2018
Hi, can you tell me how to increase the volume through hardware.can i do it either by changing capacitors or increase power to the circuit or any other ways you know? thanks!
May 16, 2018
Increasing the operating voltage of the Op-amp will help. If you are looking for more volume use a more powerful audio amplifier circuit
May 20, 2018
The errors occurred in the code given above is due to music.play() function typecasting is required in this function
music.play("4.wav",25);
Corrected function argument is:
music.play((char *)"4.wav",25);
May 22, 2018
can i use LM358/LM741 instead of LM386?
May 28, 2018
Yes cyrille you can also use LM358/LM741 as an amplifier but LM386 has a high gain and will work perfect for an audio amplifier. But still LM358/741 will also work
Jun 05, 2018
I did everything on this page my professor checked it and said its good but when i power it on nothing happens. 05, 2018
what are the alternative capacitors i can use for 1uF 25v capacitor?
Jun 06, 2018
Use 10uF of any voltage value
Jun 10, 2018
hi, this is good, but how can I change the buttons to have one play one track and the other to play all. thanks for your help
Jun 13, 2018
hey my friend, anyone have bips sounds? when um push the bottom i have only the bips.
Jun 14, 2018
Hi, its not entirely clear to me regarding the connections to the push buttons. Do both buttons have a connection to GND and then to the pins 2 and 3 respectively? So the idea is that pulling those pins to GND will trigger the appropriate function?
Thanks
Jun 18, 2018
Yes TIM you are correct
Jun 23, 2018
hi, I have a built audio amplifer. the Haljia LM386. it has 2 gnd one in and one VDD. which wires to I connect in this connection
Pages | https://circuitdigest.com/microcontroller-projects/arduino-audio-music-player | CC-MAIN-2019-26 | refinedweb | 3,724 | 73.58 |
that takes an integer N and simulates the motion of a random walker
for N steps. After each step, I am required to print the location of
the random walker, treating the origin position as(0,0).
Finally it asks to print the square of the distance from origin to the final point.
Although I have worked a little on this java problem. I cant seem to figure out
a method to print its location /final position after N step.
My code is here:
public class math { public static void main(String[] args) { int N= Integer.parseInt(args[0]); int M=Integer.parseInt(agrs[1]); for (int t = 0; t < N; t++) { int x = 0, y = 0; int[]a= new int[x][y]; // repeatedly take a random step, unless you've already escaped while (x >= -M && x <= M && y >= M && y <= M) { double r = Math.random(); if (r < 0.25) x++; else if (r < 0.50) x--; else if (r < 0.75) y++; else if (r < 1.00) y--; a[x][y]= true;
I would be extremely extremely grateful of your help. Please note this is not a homework asssignment.
This problem is from a book that I am using to learn java over summers. Thanks
Kind Regards,
Ahsan
*** Edit ***
Please use code tags when posting code.
This post has been edited by GunnerInc: 29 July 2012 - 08:50 AM
Reason for edit:: Added code tags | http://www.dreamincode.net/forums/topic/287336-random-walk-java-problem/ | CC-MAIN-2016-40 | refinedweb | 236 | 75.61 |
FOR IMMEDIATE RELEASE
Friday, April 23, 2004
Contact: Rich Eggleston
(608) 257-5881
TABOR: Still A Bad Idea
Rep. Frank Lasee (R-Ledgeview) is working
hard to clear the land mines that his proposed constitutional amendment, known by the
seductive nickname "The Taxpayer's Bill of Rights," would place between
Wisconsin citizens and their ability to obtain police and fire protection for their homes,
an education for their children and help for their elderly parents and grandparents.
Rep. Lasee has responded to the most obvious
and significant flaws in Colorado's TABOR amendment. But TABOR is still a very bad idea.
We shouldn't put fiscal policy in the
constitution. There isn't a formula in state law that isn't changed with regularity. Put
an untested formula into the constitution, and it takes many, many years to change. Rep.
Lasee has dealt with problems that we have identified, but as sure as the sun will rise
tomorrow, there are other problems we haven't dreamed of. That's why the constitution
should be the place for basic principles about human rights, and basic rules to guide
government, not micromanagement.
We shouldn't rush something like TABOR
through the legislative process without full and informed debate and discussion. So far,
the debate and discussion has been taking place behind closed doors. Legislators slam the
door in the face of reporters who try to get into these meetings. Assembly Republicans are
meeting Tuesday on this issue. Every reporter in Madison should attend, and they shouldn't
leave except in handcuffs.
It's a bad idea whose time has passed.
State and local officials have heard the message of voters that taxes are high
enough. Gov. Jim Doyle signed a no-tax-increase budget. Local officials across the
state adopted a levy freeze as part of their budgets. Municipal levy increases this year
were the lowest in 14 years, and school district levy increases were the lowest in a
decade. Government is slimming down in response to the public. Government will continue to
respond to the public. That's its job.
The Wisconsin Alliance of Cities is dedicated
to making local government work better and more efficiently. We're dedicated to reform
that saves the taxpayer money. We're dedicated to regional solutions that save the
taxpayer money. We're dedicated to eliminating duplicative layers of government that waste
the taxpayer's money.
We come up with original ideas, we don't
import dysfunctional ideas from other states. Professor Don Kettl's Blue-Ribbon Commisson
on the State-Local Partnership for the 21st Century came up with 139 ideas for improving
and slimming down government. TABOR wasn't one of them. Once upon a time Wisconsin
embraced homegrown ideas like Unemployment Compensation and Worker's Compensation and
exported them to the rest of the country. If we embrace ideas like the Wisconsin Alliance
of Cities and Professor Kettl's, not TABOR, we will truly prepare Wisconsin to thrive in
the 21st Century. | http://www.wiscities.org/TABORsubreax.htm | crawl-002 | refinedweb | 500 | 55.95 |
I'm adding a ViewModels subfolder to an MVC2 solution for the partial classes and metadata classes that proxy my LINQ to SQL classes (which are in the Models folder). Unfortunately, the ViewModels namespace makes the partial classes no longer map to their Models class counterparts...
I'm keeping my 'service' or 'repository' classes that provide unit testable data access methods in the Models folder.
I can imagine taking the contents of the Models folder to a WCF web service in the future ~
I'd rather not have a seperate set of classes for the ViewModels to proxy the Model classes, but maybe I'm just being lazy.
How else would you go about it?
Thread Closed
This thread is kinda stale and has been closed but if you'd like to continue the conversation, please create a new thread in our Forums,
or Contact Us and let us know. | http://channel9.msdn.com/Forums/Coffeehouse/545633-MVC2-with-MVVM-over-LINQ-to-SQL-classes | crawl-003 | refinedweb | 150 | 63.83 |
Talk:Bible
This article is marked (in my head) for MAJOR improvement. It will (hopefully) include origins of religious texts, etc. If I'm not too stoned on Vicodin after my operation, I'll try to finish it then.--PalMD-yada yada 17:43, 2 June 2007 (CDT)
- They gonna do a vasectomy while you're out? --Kels 18:28, 2 June 2007 (CDT)
- I told them not to listen to anything my wife says until im awake.--PalMD-yada yada 22:45, 2 June 2007 (CDT)
[edit] Bible as an idol
I've added this section, which I invite others to expand (or delete if you so desire); even within modern evangelical circles there is a growing awareness that the Bible (while still being the focus within Christendom) needs to be reined in a bit. In the gospels it mentions how the orthodox Jews of Jesus' day scorned Him and his followers for "doing what is unlawful on the Sabbath"; Jesus replied that the "sabbath was made for man, not man for the sabbath". This same logic can be applied to the Bible. ~~ CЯacke® 11:22, 6 July 2007 (CDT)
[edit] Christian thought or Protestant thought
Should the "The Bible in Christian Thought" section be retitled? My understanding (and I'm a heathen, remember, but have had my share of religious schooling) is that Catholics, the Orthodox and to some extent Anglicans/Episcopalians believe in in the supremacy of the teachings of the apostolic succession (ie, whatever bishops they recognize) over any individual interpretation of the Bible. And most of the maybe-Christian new religious movements (Mormonism, Jehovah's Witnesses, Christian Scientists) accept some recent prophet or prophecy as superseding, or at least adding to, the truth in the Bible. --jtltalk 21:52, 7 July 2007 (CDT)
- Yup, that section could simply be better. I wrote the whole books crap part, cause I "know" it. But I don't give a shit what various sects actually say. They're all equally loony, to me, although some of them are nice folks. humanbe in 22:54, 7 July 2007 (CDT)
[edit] 11!!!111!111
Damn, I do sometimes write well, if I do say so myself! Nice work on that OT/NT/Canon section, punk! humanbe in 23:05, 7 July 2007 (CDT)
- Um, thank you? Are you drunk? humanbe in 23:05, 7 July 2007 (CDT)
[edit] Bible texts
This article needs three things: a Hebrew Tanakh that is both accurate and hosted by someone who is theologically neutral, a Greek New Testament that meets the same requirements, and an English translation that is both highly accurate and readily requotable. (Hey, anyone want to create a New Atheist and Humanist Translation, both mercilessly accurate and more-or-less public domain?) EVDebs 22:58, 1 September 2007 (CDT)
- I can't read Greek or Hebrew. Or "English", whatever the hell that is. humanbe in 23:02, 1 September 2007 (CDT)
- I can't either, but it seems a very wise idea for those who do. EVDebs 00:42, 2 September 2007 (CDT)
- Um. You're not seriously suggesting that we start re-translating the Bible, are you?
- As for the originals in Hebrew and Greek, they can be found here. --AKjeldsenGodspeed! 04:57, 2 September 2007 (CDT)
- Could we have a quick show of hands for those who feel comfortable working in these languages? I certainly claim no expertise and I rather think that this suggestion is presently a little beyond our scope and not really within our objectives. But hey! if you get the volunteers!--Bob_M (talk) 05:06, 2 September 2007 (CDT)
- Well, I don't mind coordinating the effort if I get the resources. I estimate I'll need about 2.5 million dollars and maybe four years for the initial work. --AKjeldsenGodspeed! 05:12, 2 September 2007 (CDT)
- That's it then. We only need to get lots of people clicking the "Make a Donation" box and we're on our way.--Bob_M (talk) 05:22, 2 September 2007 (CDT)
- Alternatively, this might help, but it's up in 2009. Also, I've found that when translating, the 'be as literal as possible' style is quite hard - you either end up breaking the prose into a syntax-less string of words, or have to add approximately 10x that which you translated in annotations. It's actually easier to learn Greek. και απολαυστιστος εςτι. To Bob M - it would broaden our remit, but it could help. I'm busy with something similar here. -- מְתֻרְגְּמָן וִיקִי שְׁלֹום!
- I've had a look at it but I'm a bit confused. What exactly are you doing there?--Bob_M (talk) 09:00, 2 September 2007 (CDT)
- Creating a useful bank of Bible quotes from various sources to ward off fundamentalists. Check out the wp:King-James-Only Movement. -- מְתֻרְגְּמָן וִיקִי שְׁלֹום!
- No, I'm not seriously suggesting it -- it was really just a joke. I do think such a thing (i.e. a translation that is beholden only to the text and not any dogma) is needed, but I think the important thing right now with RationalWiki is to make it just that -- a Wiki for rationalists. If I understand correctly the Fox Torah and the Lattimore NT are actually good starts on that sort of thing, even if they're more copyrighted than I'd like. EVDebs 16:50, 2 September 2007 (CDT)
- A "translation beholden only to the text" sounds like a good idea in practice, but is usually not possible, especially when dealing with such a complex material as the Bible. The problem is that a translation is always also an interpretation. Sure, you could do a Bible translation from a rationalist viewpoint, or an atheist one, or feminist or whatever, and that has probably been done a few times - there are a lot of versions out there. However, one shouldn't believe that they are any more objective or anything than the 'classic' versions, they're just interpreted differently. --AKjeldsenGodspeed! 17:40, 2 September 2007 (CDT)
[edit] Idea
I have always had an interest in the Bible and the study of it, especially from a non-fundie pov. There are a gazillion sites on this sort of thing, but why not do some here? There is no reason we can't take a "rational" look at the bible and various related issues. So far, we only have 2 or 3 people who have a decent grounding in the subject. Also, I do get concerned that this whole thing might upset my fellow atheists, but we'll deal with it.--PalMD-Ars longa, vita brevis 09:03, 2 September 2007 (CDT)
- Absolutely. We do have some good stuff already, but obviously there's so much more to do. Any suggestions on which subjects should receive particular attention? I for one can sometimes get too mired down in details to see the big picture. --AKjeldsenGodspeed! 17:44, 2 September 2007 (CDT)
[edit] edit conflict & idea
Hey Eugene (VDebs), sorry if my minor proofreading edit clashed with you building actual text, I hope it wasn't too hard to save your work.
Idea: the third chunk of this edit looks like a pretty strong refutation of any biblical literalism foolishness - basically if experts who study and translate from the original have to make a zillion notes as to how they made decisions (and of course, don't agree with each other), how is a relatively lay person supposed to somehow adhere to inerrancy and a literal interpretation. Just thought it was good food for the worms here. humanbe in 02:31, 5 September 2007 (CDT)
[edit] redlinks
Some are looking a bit obscure... and there's no need to link to WP, esp. in a header. People know it's there. humanbe in 21:07, 12 September 2007 (MDT)
- I very much disagree on the Wikipedia links -- ultimately it's a usability issue. Quite simply, Wikipedia usually has much more information and many more references on a given subject than we can provide, so it makes more sense to me to provide one-click access to Wikipedia for anything that either a) we haven't covered yet or b) is relevant but outside the scope of RationalWiki. There are certain things we try to do well (analysis and snark, mostly) and certain things Wikipedia definitely does well (like providing firehoses of information on a subject). Taking advantage of the wp: tags to provide a more-or-less seamless interface to that information seems a nobrainer. EVDebs 21:21, 12 September 2007 (MDT)
- I still disagree. We want people to stay here, not run off to WP. And as I said, everyone knows WP exists. What we wind up with is files full of "links" that leave the site. We are not a linkfarm to WP. They should be one for us! I have no problem, however, with WP links in an "external links" section, that are identified as such. Oh, and how about the red links? As I said, some are unlikely to ever be articles here. humanbe in 23:26, 12 September 2007 (MDT)
[edit] History (and science) section
I seem to recall some speculation that the Bible was wrong when it mentioned the "Hittites" as a nation/state since secular history and science were mute on there being such a people...for a long time. But evidence for there being a "Hittite" people emerged from secular history and archeology. Alas I do not recall where or when such evidence came to light. CЯacke®
[edit] A snarky observation
I've read an awful lot of religious literature, from Snorri's Eddas through to the Ramayana, and the one thing that strikes me is that most of them have at least some sense of decent narrative, some sense that boring the reader to death isn't the done thing. The Old Testament, on the other hand, is a tedious tract mostly consisting of irrelevant bollocks. Quite why we need to know that foo begat bar, or that jibble were the years of baz is a mystery to me. This leads me to the conclusion that if I had to make a bet, I reckon I'd put my money on Judaism as The One True ReligionTM, because either the Torah was written by a right bunch of boring wankers, or it was inspired by a god who wasn't really interested in entertaining us. --Jeєv☭sYour signature gave me epilepsy... 15:27, 21 November 2007 (EST)
[edit] Awesome thing!
Dis is teh awsum. Kitteh say so. -Master Bra'tacKree! 00:50, 6 January 2008 (EST)
[edit] Dan
This is blatant censorship of Dan. He deserves credit for his thoughtful analysis. Perhaps it's not censorship at all, but I still think the ACLU should get involved. DoggedamesP 00:34, 12 February 2008 (EST)
- Hey, I didn't censor it. But our sorta general protocol says we should link first-person edits in articles (ie, "one RW luser says..."). Since "Dan" is an unidentified editor (so far, please, Dan, if you're here, you can fix this), he is for now just "some IP". That's not censorship, it's attribution. We let him have his say, didn't we? human 00:39, 12 February 2008 (EST)
- Oh, and I tried to dig through the diffs to see if it was a "signed in" editor so I could attribute it, but, well, that got old real fast. You are more than welcome to find the originator, if you can. human 00:40, 12 February 2008 (EST)
- Why do you have a section of the article that is the opinion of a random IP editor? Not that you shouldn't, it's just a little odd. DoggedamesP 00:44, 12 February 2008 (EST)
- We do odd things like that sometimes. You will often see relatively first-person interjections in articles, linked to the opiner's user page. Even if "Dan" was just an IP (and of course, we all are, behind our user names), what was added was interesting, so it got left in. If Dan does happen to be an editor here and we can figure it out, I'd love to see it corrected back and linked. I can only find a "DanH", who never edited this article as far as I can tell. I suppose one of us could dig through the diffs and find the author if we were motivated enough... human 14:49, 12 February 2008 (EST)
- I take issue at that: you may be an IP number, but I, behind my username, am a fully formed albeit a bit short, quite rational, human being. I even have a real life somewhere, I just forgot where I put it. Editor at CPBring TK back 14:56, 12 February 2008 (EST)
- Huh I was sure there was a Real life article somewhere. Editor at CPBring TK back 14:57, 12 February 2008 (EST)
- It's in Fun:Real life. Go figure. --AKjeldsenGodspeed! 14:58, 12 February 2008 (EST)
I found the diff, it was added by user:Neocon Wandal. I will change it back to "dan" and link to user talk:Neocon Wandal. human 15:16, 12 February 2008 (EST)
- I support adding a link to his analysis on the main page. DoggedamesP 15:19, 12 February 2008 (EST)
- Why? (and how?) human 15:38, 12 February 2008 (EST)
- Because its Dan's interpretation. DoggedamesP 15:46, 12 February 2008 (EST)
- That doesn't really answer either of my questions. human 15:56, 12 February 2008 (EST)
- Yes. DoggedamesP 15:59, 12 February 2008 (EST)
[edit] Jason BeDuhn reference
One striking case of an unpopular but accurate translation is the Jehovah's Witnesses 'New World Translation', translating John 1:1 "was a god" where more popular versions use "was God" according to "Truth in Translation: Accuracy and Bias in English Translations of the New Testament" by Jason David BeDuhn.
After further research, I'm not too convinced of this -- it's hard to tell, but BeDuhn's position on the NWT is evidently a minority position, and I suspect some further commentary on this matter may be required before putting it back in the article. (Or, more likely, moving it to Guide to Bible translations, as it may not really belong here.) EVDebs 01:48, 27 May 2008 (EDT)
- Looking at bible translations by languages, I was particularly surprised at this one from En Levende Bok (Norwegian): "Før noe var skapt, var Kristus hos Gud. Han har alltid vært levende og er selv Gud." (Before anything was created, Christ was with God. He has always been alive and is himself God.) Way to jump out of the literal translation and go straight for the jugular... "Oh, everyone knows that 'word' was code for Jesus Christ in this" uh... maybe so, but it's a bit outrageous to go and make this change... it's like translating the second amendment as: "A state needs a militia to be safe. The right of the militia to bear arms shall not be infringed." It's like, wait what? You just made a critical jump in reasoning there... even if a bunch of precedence says that's what it should be... you can't just throw it in there. As well, the Vulgata (Latin) says: "in principio erat Verbum et Verbum erat apud Deum et Deus erat Verbum" (In the beginning there was the word, and the word was with God, and God was the Word). Of course, Latin lacks articles, so they would be confused as to if it were "was a god" or "was God", right? Not the case, Latin didn't make such fine distinctions between the definitiveness of a noun. This "Deus" was the "Deum" referred to earlier, just like "Verbum" is the same as the other two "Verbum" that were before it. In particular, the expression is in the form "God was the word", which syncs with the Luther translation revised in 1984: "Im Anfang war das Wort, und das Wort war bei Gott, und Gott war das Wort." (In the beginning was the word, and the word was with God, and God was the word.) In both situations the subject of the copula is "Deus/Gott/God" and the predicate is "Verbum/Wort/Word". If the noun "God" was taken to be indefinite that would yield: "a god was the Word", which doesn't quite make sense does it? 
Especially since this form reads like a generalization statement, like "a man is mortal", which Latin has a different form for (I'm pretty sure). So, yeah, I don't find any support in the authenticity of this being an "accurate" translation in the NWT. --Eira omtg! The Goat be praised. 04:56, 27 May 2008 (EDT)
- Any translation is also an interpretation, whether the translators admit it or not. As such, it can always be debated whether a literal translation is more "accurate" than a dynamic, or even idiomatic one which tries to convey the meaning of the text. In the case of John 1:1, there can't really be any doubt that λόγος refers to Christ, so translating it as such is not necessarily inaccurate. It simply reflects an editorial decision on the part of the translators to translate it dynamically rather than formally.
- As for the "God/a god" question, the original of course has καὶ ὁ λόγος ἦν πρὸς τὸν θεόν, καὶ θεὸς ἦν ὁ λόγος. In a purely grammatical view, as Koine Greek doesn't have an indefinite article, either of those possible translations would be 'accurate.' Of course, once you look at the context as well, it becomes apparent that "God" is most likely the translation that best reflects what the author meant, but then it becomes a question of interpretation rather than grammatic accuracy. --AKjeldsenCum dissensie 06:57, 27 May 2008 (EDT)
- They did have iron in 100 AD. ;-) Of course you're right that we can't know with absolute certainty what the author actually had in mind, especially in the case of sources such as the Gospel of John where we don't even know who the author was. However, there are certain techniques, such as textual criticism and historic analysis, that allow us to at least make a well-founded educated guess as to his intentions. For instance, when he writes of "the word", "logos", we know that this term carries a whole range of different meanings and connotations in Hellenic philosophy and that he, being literate and educated in Koine Greek, would be familiar with those.
- Speaking more generally, it should come as no surprise that I believe thinking about historical issues is never silly, since doing exactly that is what puts food on my table. ;-) However, while the purely practical applications of history may be limited - I don't really subscribe to the old saying about learning from history or being doomed to repeat it - I still think that it's important to study it, not only because it tells the whole tale of who we are and how we came to be what we are today, the different ways we interpret history are also a great reflection of our societies and cultures. They way history is generally studied and understood today is vastly different from e.g. the late 19th century, and that change reflects a broader change in society. --AKjeldsenCum dissensie 08:31, 27 May 2008 (EDT)
- It's interesting to note here that it continues to be "καὶ θεὸς ἦν ὁ λόγος" (And God was the Word) Especially since I know that Greek has a tense called the Aorist... I'm a little confused as to how it fits in with things (mostly because of influence of the Quenya Aorist). Does Greek use the Aorist tense for generalized statements? Like "a sheep eats plants."? If this is the case, then by not being in the Aorist tense, it would certainly rule out a reading of "and a god is the word". In either case, I have to agree... no matter how you look at it, you almost have to construct a meaning in order to make "a god" fit (and when I'm a believing Christian I don't believe in the Trinity... I do believe in reading texts accurately though.) The likelihood of the author switching from proper+definite into indefinite+definite just seems really really unlikely... regardless of the language used. The most common and likely interpretation of the phrase is "was God", and like you said... everything is replete of construing meaning in order to fit a particular ideological point (see: "almah" = "virgin" vs "young girl") --Eira omtg! The Goat be praised. 10:11, 27 May 2008 (EDT)
- That sounds reasonable, although I'm afraid the Aorist tense is a bit too advanced for my limited Greek skills. Interpreted could probably enlighten on the subject. --AKjeldsenCum dissensie 12:22, 27 May 2008 (EDT)
[edit] A snarky summary of the Bible
Maybe we should start a "snarky summary of the Bible" mainspace article or something... or simply "A summary of the Bible", and then we can link to it from here, then we can track changes and tweaks more accurately for that summary. What are your thoughts on either the content of the section, or my idea presented here? --Eira omtg! The Goat be praised. 06:47, 27 May 2008 (EDT)
- This is exactly what we have the funspace for. I even put the template up just for you. Have at it, fair knight!
Radioactive Misanthrope 07:12, 27 May 2008 (EDT)
- I undid the massive change to "Dan's" section - can we copy the diff over to a "fun" article, perhaps? I also whapped the fun template in the process, again, a diff can be used to get the red link and start the page... ħuman
15:36, 27 May 2008 (EDT)
- If I might ask, who is Dan, and why is he exempt from the "prepare to have your writing edited mercilessly" clause? --AKjeldsenCum dissensie 16:00, 27 May 2008 (EDT)
- I think there is a link to his user name. And it just seems a shame to mercilessly edit his bon mots since they are so amusing. Which is not to say that someone else can't do a bang-up job along a similar vein. Of course, if the consensus is to fold, spindle, and mutilate this lovely piece of early site SPOV, who am I to argue wit de mob? ħuman
19:02, 27 May 2008 (EDT)
- [Aschlafly]I have reviewed the edits[/Aschlafly] and must say I prefer Eira's version. Or rather, that is not entirely precise - actually I dislike both versions immensely for their ruthless assault on an innocent and defenceless historical source, but at least I dislike Eira's version considerably less (this may optionally be considered a compliment). Also, Dan or Neocon Wandal or whoever he was seems to have visited us only for the briefest of moments, while Eira is still around so I can shout at her. --AKjeldsenCum dissensie 19:14, 27 May 2008 (EDT)
- I was hoping Eira's version would end up being the fun: article. I suppose I should have copied it there... as far as the "innocent and defenceless" thing, das Bibel has many "divisions", I think. ħuman
19:32, 27 May 2008 (EDT)
- <pedantic>Actually, it's "Die Bibel"</pedantic> My intentions with most of the summary was to keep the snarkiness there while making the summary much more accurate and biblically based. Moses was a murderer (that's why he fled into the desert), the prevailing beliefs are that Moses gave the people the Pentateuch, and thus is responsible for most of the material in Genesis... it may be from oral traditions handed down, however he is the one to have codified it. So, essentially, the Bible kind of works like a typical Western documentary/novel in that it starts from the very beginning even if the main character doesn't come in until mid-way through. If the Bible were done in the way common in Anime, it would start with Moses and then reveal the pre-Moses time through flashback, or story. Essentially, the idea of starting in the middle of the story, because it introduces the most important characters. It also does a good job of splitting up the Bible into three-ish sections (that don't entirely superscribe all of the bible) the part revealed and codified by Moses, the Gospels, and the post-Jesus works. Of course, I'm prepared to entertain ideas of people on how to further make the summary more biblically accurate, or such. Kind of the idea is "This is a snark, but entirely honest summary of the bible, if you want to read the version where we've not excluded lying, see here: Fun:Summary of the Bible" --Eira omtg! The Goat be praised. 02:06, 31 May 2008 (EDT)
[edit] Fun?
Hi, now that we have a Fun:Bible article, is this Fun Bible 2: The Revenge, or what? Should some funniness be taken away from this article and maybe inserted into the Fun one? (Editor at) CP:no intelligence allowed 05:39, 31 May 2008 (EDT)
- No, I see no reason to move the funny stuff here into the Fun: article. I think more so the idea is, that everything here, even if funny, is at least justifiable. In the Fun: namespace, it's all fair game. I mean, if this article is going to start off with the standard RW premise of "this is a book of superstitions" then we're already going to be pissing off the Christians a ton, so who cares if we have a laugh while doing so... The fun article though? We can mention "Jesus 2: The Return", where Zombie Jesus comes back to infect the world and bring about a new undead Christian army to throw down the secular athiests of the world... just as an example. I basically started the Fun: article from the summary of the Bible, but we are under no means to keep honest, truthful, or even sane while editing the Fun: article. --Eira omtg! The Goat be praised. 20:48, 1 June 2008 (EDT)
- Again, our mission is not to piss off Christians. Some among us are Christians. This is not Atheistwiki. And to start saying that "The Bible is a book of fairy tales" is either "fun" or plainly stupid. (Editor at) CP:no intelligence allowed 14:09, 2 June 2008 (EDT)
- Funny, see, here I thought our purpose were laid out here: RationalWiki
- Our purpose includes the following:
- Refuting and analyzing the anti-science movement, ideas and people
- Refuting and analyzing the full range of crank ideas, why do people believe stupid things?
- Essays and works on right wing authoritarianism, religious fundamentalism, and other social and political constructs
- Who are the largest opponents to anti-science except literalist christians. You cannot say that the Bible is literal truth, without also saying that science has found a number of things that entirely disagree with the bible. Refuting and analyzing the full range of crank ideas (biblical literalism) and "why do people believe stupid things" like, that the Bible is possible even accurate. As well, our essays and works are all centered either on disestablishing authoritarianism (libertarian-ist goal) religious fundamentalism (anyone who thinks the bible is actual history and truth) and other social and political constructs (like, talking about libertarianism, it's downfalls, critically analyzing not only our opponents of debate, but ourselves.) This is not AtheistWiki, but the first and very foremost position of this wiki is that Christianity cannot be demonstrated rationally to be any more truthful than any other religion. Including the flying spaghetti monster, or the invisible pink unicorn. If you don't like that rational thought construes the bible as an anthology of allegorical and metaphorical ideas that are designed to guide a christian loosely towards a better life. Then, tough beans, that's mob rule. Just because a book is full of fairy tales doesn't mean it doesn't have applicability to real life. Aesop's Fables, and the Grimm Fairy Tales (not the disneyified versions) contain fundamental information that is true even to this day, and in some ways, help give a more open and understanding view of the world than the Bible does.
- So, no this site isn't intended to piss of Christians, but it's sure designed to slap them across the face anytime they try and claim that the Bible is Truth. --Eira omtg! The Goat be praised. 22:22, 2 June 2008 (EDT)
- And no, this article is not "honest, truthful, or even sane", atheist or not. (Editor at) CP:no intelligence allowed 14:09, 2 June 2008 (EDT)
- But you must admit there are a number of fairy tales in the Bible. To the point where it becomes very difficult to wheedle out historically accurate passages. There is also beautiful poetry, sage aphorisms, psychedelic visions, and some horrific glimpses into a violent, racist and misogynistic past. Rational Edthink! 15:30, 2 June 2008 (EDT)
- As for this point, my summary of the Bible is honest, truthful, and sane. And biblically and historically based as well. Please, I invite you to challenge anything I wrote in it. --Eira omtg! The Goat be praised. 22:23, 2 June 2008 (EDT)
- I don't disagree with anything you said up there, but would like to point out that biblical literalism and crank are not mutually exclusive. Lyra § talk 22:39, 2 June 2008 (EDT)
- One thing: The theory for the lineage thing is that all Jews are descendants of Sarah, Rebekah, Rachel, Leah, Abraham, Isaac, and Jacob. Lyra § talk 23:06, 2 June 2008 (EDT)
- Six of one, half-a-dozen of the other. While you don't have to be a crank to believe in biblical literalism, biblical literalism is not a rational belief. *shrug* I mean, if it makes you feel better saying "hey, actually an idiot is someone under 70 IQ points, not a republican", well, go ahead, because you're correct. :) --Eira omtg! The Goat be praised. 23:08, 2 June 2008 (EDT)
[edit] One of those "problems with using the Bible as literal truth" isn't really one
The bullet that says "taken literally, no one is to have sex with a man" happens to completely ignore the clause "as with womankind." Applying this clause, the commandment would only forbid female bisexuality. Either that, or it would forbid women from not having sex with men they are sleeping next to. I doubt that strawmen like that bullet, however, are permissible on RW, but I'm just a newbie so what do I know? Pear 11:33, 16 July 2008 (EDT)
- HUH? The statement is not geared for women, literally or otherwise. It says simply "Do not lie in the bed with a man, as you would with a woman". Now, perhaps that means other men's beds have cooties, but it has nothing to do with woman on woman sex. It is a fairly straight forward statment that sex with other men is not acceptable. When you look at the Leviticus passage in context, of course, you will note that it is for the followers of Levi, that is, it's for the Priests. It's likely the prohibition was purely for the stricter standards of the priesthood. Also, if you put it into context with the culture at large, you'll note that having sex between men was --though not common-- accepted in the so-called "pagan" religions.--WaitingforGodot 11:34, 16 July 2008 (EDT)
- Actually I rather agree with our unsigned editor - it's a bit of a long shot. There is plenty more unreality in the bible without grasping at this straw. Waiting - I don't know if you've read the part he's complaining about in the article. --Bobbing up 11:36, 16 July 2008 (EDT)
- Right. I just removed the bullet in question. Pear 11:52, 16 July 2008 (EDT)
- Yeah, i have, and in hebrew. ;-) The passage (Levi 18:22, of course) says in hebrew:"V’et zachar lo tishkav mishk’vey eeshah toeyvah hee." Which is literally translated as "And with a man you shall not lay lyings of a woman." (note, there is no prepositional "in", in the lyings. So you go figure what it actually meants. but the "as with a woman" is an english (KJV) translations that brings up Pear's problems with the "as with womankind". (the whole "kind" thing gets on my nerves, as well. damn did King James like flowery translations).--WaitingforGodot 11:43, 16 July 2008 (EDT)
- What I mean is - Have you read what we say in the article?--Bobbing up 11:44, 16 July 2008 (EDT)
- OHHHHHHHH, my bad. I'm Sorry Pear! I'd read it when i first came here, and didn't realize Pear was talking about the line in the article. *thwaps myself on the head*. --WaitingforGodot 11:50, 16 July 2008 (EDT)
- Welcome to my life. :) Pear 11:57, 16 July 2008 (EDT)
- Oops, sorry, I put it back in before reading this... although I can't tell from this if a consensus has been reached yet... ħuman
16:00, 16 July 2008 (EDT)
- that's cause one of us (me) was off on planet Pluto or something (oh, wait, it's not a planet... no wonder i'm so lost) and was arguing a totally different point. Pear is correct. You should use her(?) text, i think, as it is humerus!--WaitingforGodot 16:07, 16 July 2008 (EDT)
[edit] Leviticus 18:22
Taken literally, anyone who has sex with a man is a sinner. Leviticus 18:22 says "thou shalt not lie with mankind as with womankind; it is an abomination." Since it says "thou" and not "men", read literally, this prohibition applies to everybody. That means that all women would have to be chaste or lesbians, and all men would be out of luck.
Considering that women were almost universally illiterate during the time the bible was written, it can be safely assumed that the bible was directly aimed at men. Therefore, this seems like an inaccurate statement. — Unsigned, by: 68.197.72.206 / talk / contribs
- I think the statement is pretty stupid & problematic (although for slightly different reasons) & I've taken it out. It's really scraping the barrel when we make those kind of weird pedantic interpretations. Ŵêâŝêîôîď
Methinks it is a Weasel 15:28, 20 March 2009 (EDT)
[edit] Logical book
Andy keeps saying the Bible is the most logical book, reading and translating the Bible are the most logical things you can do, logical logical Bible logical. Since Andy's not going to, can someone explain what that means? --Syndrome (talk) 03:08, 13 December 2009 (UTC)
- If it is so logical, why does it need retranslated? Shouldn't it already be the epitome of logic? Aboriginal Noise Theist, barely hanging by a nail 03:11, 13 December 2009 (UTC)
- No one really knows, Andy is insane. ħuman
03:14, 13 December 2009 (UTC)
- Actually it's not that hard to figure out. He means that the true Bible stories are the most logical ever written. However, they have been corrupted by liberal translators. Fortunately, conservative insight can determine the true meaning of the Bible without consulting the original text. Makes perfect sense, doesn't it? Tetronian you're clueless 04:43, 13 December 2009 (UTC)
- I've never understood the whole "logical" or "scientific" thing either. For the record, it's not just Schlafly that says it a lot, it happens a lot. I once had to deal with someone who tried to claim that, even if they grew up in a Muslim country with strict laws and strict religious upbringing and censorship of the Christian faith, they'd still convert to Christianity because "it's the most consistent and logical religion". Even if they'd never read the Bible ever.
d hominem 16:16, 13 December 2009 (UTC)
- If anything, I'd say Confucianism (to the extent it's a religion at all) would be way more logical. After all, being logical and orderly was the point. --Kels (talk) 16:48, 13 December 2009 (UTC)
- Is there, in fact, any requirement for religions to be logical? While there may be a requirement for their believers to believe they are logical there would seem to be no real reason for them to actually be logical. In fact, given that every religion relies, at some level, on supernatural or miraculous beliefs the requirement would seem to be for all religions to be non-logical.--BobNot Jim 17:01, 13 December 2009 (UTC)
- Are you guys fuckin' kidding me? I thought this was a wiki for rationalists. There's only one reason Andy Schlafly misuses the word "logical" in this context - it lends credence to something that is by its nature inherently illogical: believing in an undetectable but omnipotent and omniscient being. I am a Catholic because I believe as a simple matter of faith in the Holy Trinity and the divinity of the person of Christ, not because I've seen empirical evidence that these things are true. Conservapederast (talk) 17:07, 13 December 2009 (UTC)
Andy is a Vulcan!
17:44, 13 December 2009 (UTC)
[edit] The Bible isn't all logical.
The Kosher section lists bats under birds. Everyone knows bats are mammals. --67.81.182.58 (talk) 01:46, 21 December 2009 (UTC)
- (Sorry, that was me.) --Crazyswordsman (talk) 01:48, 21 December 2009 (UTC)
[edit] Conservapedia:The Conservative Bible Project
I didn't write the reference to the Conservapedia:The Conservative Bible Project, it's been there since at least 2008 and those who follow Conservapedia know Schlafly is rewriting the Bible. Tyrannis took it out though RatWikians thought it worth keeping for 3 to 4 years. That put me into a difficult position, if I let Ty do that it would harm the wiki, if I restored it that would provide Trollfood. Perhaps WaitingforGodot, may feel like investigating the Conservative Bible Project as a case study in how supposed sacred texts get rewritten to suit some agenda or other. That is one way new heresies, new denominations and new religions start. Proxima Centauri (talk) 08:30, 23 March 2012 (UTC)
- No one gives a fuck, and no CP in mainspace. Тyrant 15:02, 23 March 2012 (UTC)
- I'm sure that we give CP enough publicity as it is. Let's keep it out of mainspace.--Bob"What can be asserted without evidence can also be dismissed without evidence." 15:32, 23 March 2012 (UTC)
- It's probably pretty well known that I am not a fan of CP on our mainspace. They are small time idiots. Wanna talk about conservatives, let's focus on AIG, or Ken Ham, or Pat Robertson - if they want to rewrite the bible, it's very much main space news. As a "process", they didn't even have a process other than looking at the King James and saying "oh, i don't like that word. let's change it". That's so out of the range of how ANYONE works with ancient texts. every word out of Andy's mouth about it "i have a good dictionary" is just sillyness. And really such sillyness it's not worthy of main space. Now, discussions of why KJV is so bloody favored might be really nice.--
Godot What do cats dream about? 15:40, 23 March 2012 (UTC)
I'm going to take a position I normally don't re: CP in the mainspace. Because the CBP got CP a fair amount of mainstream cred--including a spot on Jon Stewart--it really should be here. P-Foster Talk ""Santorum is the cream rising to the top."" 15:45, 23 March 2012 (UTC)
- it was that noticed? --
Godot What do cats dream about? 15:48, 23 March 2012 (UTC)
- Jon Stewart = pretty well what all of my friends watch, and their first encounter w/ CP. P-Foster Talk ""Santorum is the cream rising to the top."" 15:50, 23 March 2012 (UTC)
- It was the Colbert Report, but yeah. -- Seth Peck (talk) 15:51, 23 March 2012 (UTC)
- True. Hadn't thought of that. Ok, I'm neutral on it.--Bob"What can be asserted without evidence can also be dismissed without evidence." 15:55, 23 March 2012 (UTC)
I stand by my position that nothing cp related should be in mainspace.ТyAhoy! 15:56, 23 March 2012 (UTC)
- Fair enough - I would be in favor of linking to our article on it, if we have on, but not really *addressing* it here, as this is already pretty long and dense. Maybe a mention in the "translation" sections. "Translating the bible is a tricky matter for experts, who require years of studies in the language, though there have been exceptions like the attempts of Conservatpida to make a "conservative" bible". or some such. But nothing in depth. my opinion, anyhow.
Godot What do cats dream about? 15:57, 23 March 2012 (UTC)
- Translation is what Biblical scholars (e.g., experts in ancient fiction written in old languages/dialects) do. CBP is agenda-driven semantic revision. The parallels to Newspeak or loaded language are certainly on-mission, but NONE of the articles on CP deserve as much attention they are being/have been given. -- Seth Peck (talk) 16:03, 23 March 2012 (UTC)
[edit] I'm not one to usually post stuff from r/atheism, but...
Anyone feel they can explain the mindset behind this? All I can come up with is an argument similar to what creationists use to argue the "gaps in the fossil record" PRATT except reversed. -- Seth Peck (talk) 22:24, 11 April 2012 (UTC)
- There may not be a lot of years between when the New Testament was written and the supposed oldest copy... but is that oldest itself unaltered, does the modern copy look like it, and has it been interpreted consistantly for its entire run? Or is the oldest known copy peppered with corrections, is the modern copy in a totally different order, and have people interpreted the text to mean just about anything they want to? This chart omits a staggering amount of information. ±
KnightOfTL;DRlongissimus non legeri 22:35, 11 April 2012 (UTC)
- It's really quite incredible, and completely ignores that the other books have copies that didn't survive, so why does that have anything to do with reliability? And if you think about it, what does that say about, for example, Harry Potter novels or the phonebook? -- Seth Peck (talk) 22:46, 11 April 2012 (UTC)
- Ok, so uh the oldest known copy of the Bible that contains the Old and New Testamate. . And this doesn't even get into the number of times that people haven't needed to cut it up to change its meaning. Or the times it's just been ignored entirely, rendering its meaning for those in seats of power practically null. Oh, those wacky renaissance popes. And their incest and their orgies. And pretty much everything else they ever did. ±
KnightOfTL;DRsufficiently advanced argument still distinguishable from magic 22:55, 11 April 2012 (UTC)
[edit] Counter Conservapedia Bible Project
Daniel Dennet has suggested that religions be reformed so that their toxic elements are not so harmful. I know that many of you, like me, are atheists and couldn't care less about what the Bible says, but we have to recognize that the memes contained within its covers or on the pages of its PDFs. Think of it as a literary exercise. We could potentially create an even bigger schism between religious apologists and fundamentalists by rewriting their holy book so that it doesn't contain so many errors. I know that in principle, this is flawed. What I am proposing is of the same nature of the Conservapedia project: propaganda. They want to propagandize the Bible by deleting or editing text so that it galvanizes their ideological base and convinces them that 'Gott mit uns' and to support their political agenda in the 21st century. We all know that the Bible itself is a propagandist work of fiction. It's connections to God are a demagogue's wet dream and will continue to trouble us. The Dalai Llamma recently tweeted, "I am increasingly convinced that the time has come to find a way of thinking about spirituality and ethics beyond religion altogether." I say we help Christians along with this and water down their Holy book. We can continue to promote Atheism to try to prevent the world from ending in holy war, but if politicians can continue use the KJV to promote violence and homophobia I think it will prevent us from achieving our final objective. Imagine no religion. — Unsigned, by: 24.69.47.65 / talk / contribs
- "Imagine no religion". I spent the day reading vicious attacks from an "in group" of male gamers to an "out group" of female gamers who wanted changes. It was vile. It was tribal. It was "us v. them". It was dehumanizing. It was very HUMAN. nothing changes if you get ride of religion. Something else takes it over.
- Besides, the single best way to get a Christian to think about their religion, is to have them actually READ the bible, as is. That sounds like a platitude, but there are a few studies that suggest it really is the best way to get people thinking. "why would god want you to stone a raped woman?" "why would god say to go and not only kill the men who you are at war with, but RAPE the women and slaughter the innocent children." actually reading it, especially with someone who knows how to disect it, is a very useful propaganda. :-) --
GodotWhy is being ignorant something to be proud of? 04:57, 15 June 2012 (UTC)
- -imagines a world with no religion- well, some great music, art pieces and other things would be gone... you'll find this amazing BON, but not everybody wants to crush religion. --il'Dictator Mikal 05:01, 15 June 2012 (UTC)
- Hello BON, your ideas intrigue me and I would like to subscribe to your newsletter. I would be happy to join your project. I've been rewriting the Book of Psalms in accordance with my own religious views. I've done about 50 odd so far, which sounds like I'm a third of the way through, but probably less, because I started with the shorter ones. See here. Zack Martin
05:37, 15 June 2012 (UTC)
[edit] The God of the Bible
Made humans in (complex/multiple genders) image.
Until Adam and Eve ate the apple they went around naked (or possibly occasionally wore protective clothing - against brambles etc). So God goes around naked but in 'Religious Art with a Capital A' when in his masculine aspect wears clothes - why (and who makes them).
Also, God in his masculine aspect must be uncircumcised. He also has 'hinder parts' which (forget whom in the Bible) saw.
What else can be deduced about his appearance? 82.44.143.26 (talk) 15:51, 3 August 2012 (UTC)
- You do realize that any pictures of the Christian/Jewish god are artist renditions, yes? As such, complaining about why he's wearing clothes in certain pictures is as worthless as asking why Jesus appears to be white/black/Asian/Korean/Arabic in XYZ painting.
- You state that the Christian/Jewish god must be uncircumcised, but this position is unsupported by any evidence even Biblical. In fact, were such a god omnipotent, then he could be uncircumcised one second, circumcised the next, and uncircumcised again the next. Without a specific source for the Christian/Jewish god having "hinder parts" the claim is fruitless. ... After all, there a hojillion copies of the Bible online, go to one, search for "hinder parts" and back up your claim.
- TL;DR: depictions of gods vary as more often than the artists themselves. Nothing significant can be derived from them. --Eira OMTG! The Goat be Praised. 03:19, 4 August 2012 (UTC)
Syllogism - Man is made in God's image; the Abrahamic covenant, therefore God is uncircumcised.
'Given the number of deities shown at least partially clothed' the idea is worth considering (even if merely as a way of passing the time).
The biblical quotation is Exodus 33:18-34:9)
Any further discussion can probably decamp to [1]. 171.33.222.26 (talk) 18:23, 28 October 2013 (UTC) | http://rationalwiki.org/wiki/Talk:Bible | CC-MAIN-2015-14 | refinedweb | 8,183 | 70.73 |
On 28 Jun 2003 at 18:45, Jeff Turner wrote:
> On Sat, Jun 28, 2003 at 07:29:49AM +0100, Upayavira wrote:
> > On 28 Jun 2003 at 11:59, Jeff Turner wrote:
> ...
> > Okay. For the CLI, the cli.xconf file is the equivalent of the
> > web.xml and the user agent.
> >
> > Now, normally the user agent requests a URI, and that's it. It is up
> > to the user agent as to what to do with that URI.
>
> Oh I see. Yep, makes sense that the 'user agent' be the one who
> decides whether or not to chase down links.
>
> > Are you saying that you want to put the configuration as to where
> > pages should be placed into the sitemap?
>
> No, that's the user agent's (CLI's) business.
Good.
> > Or an alternative would be to ask: can you always do your link view
> > with a single XSLT stage? If so:
> >
> > <map:match
> > <map:generate
> > <map:transform
> > <map:transform
> > <map:transform <map:serialize/>
> > </map:match>
> >
> > So there's no hidden link gatherer. And you've got a single xslt to
> > filter, etc. Not specifying src="xxx" skips the xsl stage. The
> > output of this xsl would be xml conforming to a predefined
> > namespace.
>
> Having eliminated the dont-follow-these-links use-case, I don't see a
> use-case for XSLT transformations, so it simplifies to
>
> <map:transform
So are you saying you can manage without the XSLT stage? Perhaps I should
explain what I had in mind a bit more with that - I guess I would call it a tee, a pipeline
element with one input and two outputs. The input is passed unchanged on through
to the next stage in the pipeline. But it is also passed through an XSLT before links
are gathered from it.
Are you saying you can manage without this?
> It certainly fixes the hard-wired'ness problem you mention above (that
> 'content' != XML before the serializer).
And it sounds as if it could be a trivial solution.
Upayavira | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200306.mbox/%3C3EFD7701.4322.3730626@localhost%3E | CC-MAIN-2015-22 | refinedweb | 336 | 74.9 |
Try the other overload when creating the htmlView — pass an explicit ContentType:

ContentType mimeType = new System.Net.Mime.ContentType("text/html");
var htmlView = AlternateView.CreateAlternateViewFromString(bodyMessage, mimeType);
Wrap the list in a wrapper object, so the response serializes as a JSON object instead of a bare array:
@ResponseBody Users
where
class Users {
public List<User> items;
}
And yes, brackets [] are used for JSON arrays.
OK, so I found the answer to my not-so-well-worded question. When I ran easy_install, it never told me that lxml had actually failed to install, since I was missing a compiler. I have no idea why it worked from the Django development server and not through Apache, but I found and installed a binary distribution of lxml and it all started working the way it should.
Apple recently disallowed developers from accessing the device's UDID (Unique Device Identifier), but some third-party libraries haven't been updated yet. In particular, some people have been having problems with Google Analytics. Another post on here recently gave a pretty good answer: App rejected, but I don't use UDID
I don't think the records are being destroyed by clearing the store's underlying Collection. I would suggest using the store's removeAll method instead, which will call destroy on each record and hopefully free up some of the resources used by each.
If your store is dealing with large datasets then this might add up and
cause a slowdown.
Since you entered just gitrep as the filename, ssh-keygen saved the key in your current directory (which is apparently your home directory, judging from your example above).

Check whether ~/gitrep and ~/gitrep.pub exist. You'll need to copy the contents of the gitrep.pub file to the destination when it asks you for your public key.
You can test where the bottleneck is by taking different parts out of the equation. To take out the CPU, just read the files and don't do anything else. To take out the disk, keep reading the same file again and again, assuming it fits in your file cache. Compare each of these times to the time for doing everything.
In Ruby, the until block doesn't use do, so you should do something like
this:
def bubble_sort(arr)
until arr == arr.sort
(arr.count - 1).times do |i|
(arr[i], arr[i + 1] = arr[i + 1], arr[i]) if (arr[i] > arr[i + 1])
end
end
arr
end
Edit: Expanded the code so it includes the algorithm from the original
question. This code works for me on ruby 2.0.0p247 (2013-06-27 revision
41674) [x86_64-darwin12.4.0].
There is no built-in language support for such large numbers. You have two
options:
if you can, use existing library, like GMP
implement you own solution
If you decide to take the second path, you might want to consider storing
digits (not necesserily decimal) in an array, and perform arithmetic
operations using well known school algorithms. Keep in mind it will be
(probably considerably) less efficient than heavily optimized library code.
Spring is trying to initialize the GraphDatabaseService during startup.
C:UsersAnthonyDocuments omcat7apache-tomcat-7.0.40in argetconfig-test
eostore is the default location configured. It makes sense to have the
neo4j db in a directory outside of the tomcat folder structure. I suggest
you find the spring config file where the GDB is being initialized and
changes it to something like below
<neo4j:config storeDirectory="C:home
eo4j-db"/>
Apple App Store isn't accent sensitive, which means: no, you don't need to
put both of the keywords. I made a search using both of the keywords in
mobileaction dashboard and you can see everything is the same for the
keywords except search score, for your reference here is screenshot:
Alright, after a lot of try's i searched the fora's of OpenCart better and
came upon this thread:
OpenCart Fora Thread
The anser there is as follows:
There's a "top" flag you set to make it show up as a top level menu
So what i did is this:
I've edited the new main category ( with uid 355 ) and flagged it as top,
that didn't helped, but i left it on.
Then i've edited all the direct childs of the main category and flagged
them as top and foila, there they are. the problem was solved.
So with my edit to the main core, and the flagging of the categorie's, the
problem is now solved
Google Custom Search! All you have to do is enter the site(s) you're trying
to search.[][1]
[1]:
Unfortunately Magento only allows you to set "customer sharing" across the
global and website scopes. The (maybe not so simple) solution would be to
move your other store to a separate website instead of a separate store.
Found in System > Configuration > Customers > Customer Configuration:
It's been a while since I've used ExtJS, but I believe that you can use
store.getRange() to get the behavior that you want (with no arguments, it
should return the full contents of the store). According to the
documentation, filtering is taken into account for this, so you should be
able to apply whatever filters you need to get the desired set of records.
As I understand your question, you want to remove the Index data from
BigMemory Go on restart.
Whether you can do this or not depends on your persistence strategy. If it
is local restartable, BigMemory Go will try to either reuse the index data
if the application was shutdown cleanly, or it will recover the indexes if
it was shutdown abruptly. So, for a persistence strategy of local
restartable, there is no way to avoid rebuilding the data.
If your persistence strategy is "localTempswap", then you will not rebuild
any data on restart and will lose the application's state.
Please see here for more information:
You can have a look at the source of the cd function by writing
source cd
There you can find that for convenience, other datatypes are converted to
file.
"Why is a string name of a dir treated as a word when no such word has
been set?"
Rebol recognizes words by syntax allowing symbolic programming. It does not
matter at all whether a word has been set or not to be recognized as a
word.
"[If] I set a word to a dir name but in upper-case, the get-word moves me
to that dir and shows it in that incorrect upper-case"
Some operating systems (such as Windows) try to be case-insensitive,
assuming that this is more convenient for humans.
Rebol string handling is also case-insensitive by default for the same
reason.
"Did anyone ever suggest that chdir could have all convenience
There is no difference.
save is an alias for commit that was introduced with this PR (Make #commit
an alias for #save)
Hope it helps
The TRANSITION doc says that you can do this to inject the store into
components :
App.inject('component', 'store', 'store:main');
You might be able to change 'component' to 'view' or to 'model', but I'm
not sure about that.
You won't be able to accomplish what you want using the store's filters
because these methods all end up filtering on the whole dataset. So you'll
need to apply your filter yourself!
In order to do that, we need to replicate the code from
Ext.data.Store#filter, except for the part that restores the whole dataset
before filtering.
That gives us:
// -- Ensure that our current filter is not stalled ----------------
// Clear our filtering if the query string has changed in a way
// that invalidate the current filtering
if (
// previous filter is stalled
) {
myStore.data = myStore.snapshot;
delete myStore.snapshot;
}
// -- Create your new or updated filter ----------------------------
var filter = new Ext.util.Filter({
filterFn: function(record) {
// your filtering
The deliveranceToStore method isn't correct. Why are you calling the method
recursively?
The method can simply be:
public int deliveranceToStore(int store) {
unitsInStore = unitsInStore + store;
return unitsInStore;
}
If there is no need to return the number of units in store with this call,
you should have the return type as void (i.e., if updating the count is
sufficient):
public void deliveranceToStore(int store) {
unitsInStore = unitsInStore + store;
}
For withdrawal, you need a similar strategy where unitsInStore is updated:
public void withdrawal(int units) {
if(unitsInStore - units >= 0) {
unitsInStore = unitsInStore - units;
} else {
System.out.println("Unable to withdraw. Insufficient units in
store.");
}
}
You can also make the
This might surprise you:
v.push_back(A(20));
v.push_back(A(10));
std::sort(begin(v), end(v));
There are aspects of vector itself that require assignability, though I
don't know, offhand, which (and I can't tell by compiling your code, since
my compiler doesn't complain when I remove operator=()). According to
Wikipedia (which references the relevant portion of the '03 standard),
elements must be CopyConstructible and Assignable.
EDIT: Coming back to this a day later, it seems forehead-slappingly obvious
when std::vector requires Assignable — any time it has to move elements
around. Add a call to v.insert() or v.erase(), for example, and the compile
will fail.
From this thread:ath ...
You are only checking if !IsPostBack, you also need to check if the
Context.User.IsAuthenticated
Which means, Forms Authentification (or other), IIdentity, IPrincipal, and
tralala
Check this out
ASP.NET MVC - Set custom IIdentity or IPrincipal
$.get is asynchronous.
That means, its callback function will execute sometime in the future,
when the request successfully fetches a response. The code below the $.get
call executes synchronously, it does not wait for asynchronous operations
to finish.
Code that depends on the data asynchronously retrieved through $.get must
be executed/called from inside the $.get callback:
$.get(url, function(data) {
$("#household_store").data(data);
//got response, now you can work with it
var household = $("#household_store");
var head = household.data('head');
alert(houseId + ": " + head);
});
If you prefer, it is also possible to attach callbacks by using the
$.Deferred-compatible jqXHR methods:
$.get(url).don
Your commands are wrong !
Go to the directory where your main.cpp is, and try the following.
g++.exe -Wall -c -g main.cpp -o objDebugmain.o
g++.exe -static -static-libgcc -static-libstdc++ -o "binDebugHello
World.exe" objDebugmain.o
then you do not need dll's longer to copy.(for your hello world program)
other Notes:
the mingw installation instructions recommend to set
c:minGW;c:MinGWin;
to path environment variable.
Normally with (try it with following 3 at once)
-static -static-libgcc -static-libstdc++
linker options. Should work. But not for ibwinpthread-1.dll
try also a clean before compile
These's no '-static-something' command.
Only standard libraries libgcc and libstdc++ can be set to static linking.
For other libraries, you first switch to static linking with "-
yeah, that is true. that's the case why, at some point, you just get rid of
old migrations and use the schema.rb
This is one of four different methods in the instructions to install
virtualenv. What's stopping you from using one of the other ones?
In fact, the site specifically says in a few different places that you
should use pip if you already have pip 1.3 or later, use the source install
if you don't have pip or have 1.2 or earlier.
So, just follow the other instructions:
$ curl -O
$ tar xvfz virtualenv-X.X.tar.gz
$ cd virtualenv-X.X
$ [sudo] python setup.py install
The [sudo] is a convention meaning "Type whatever command you need to run
the rest of this as root, which is sudo on most setups on most Unix
systems."
On Windows, there is no such thing as sudo; runas or start with appropriate
flags is the closest subs
Because you're already doing what a call without parenthesis does.
$user->tweak === $this->belongsToMany('User', 'friends', 'sender_id',
'receiver_id')->get();
$user->tweak() === $this->belongsToMany('User', 'friends',
'sender_id', 'receiver_id');
and if you say that
$user->tweakTwo() === self::results()->get() ===
$this->belongsToMany('User', 'friends', 'sender_id',
'receiver_id')->get();
then
$user->tweakTwo() === $user->tweak
and
$user->tweakTwo
would be
$this->belongsToMany('User', 'friends', 'sender_id',
'receiver_id')->get()->get();
The problem isn't calling Math.ceil - it's using the result of it.
Math.ceil returns a double, which can't be implicitly converted to float.
You could cast it though:
perU30F = (float) Math.ceil((under30FY / totalWatchers) * 100);
Or you could just use double everywhere instead of float :)
(Math.round has an overload which accepts and returns float; Math.ceil
doesn't.)
Have your function in this way.,
void Ydisplay(int D1[])
{
cin >> a; //Remove getting input from main()
for(int i=0;i<a;i++)
{
cout<<' '<<D1[i];
}
I might have missed an HQL syntax element
SELECT user FROM CodeUser codeUser
JOIN FETCH codeUser.user user
JOIN FETCH codeUser.code code
WHERE user.firstName = 'Dave' AND code.value = 'abode'
assuming Code has a field value holding the value "abode". You might not
need the FETCH.
You can always replace the literal values with a placeholder like ? or a
named placeholder like :name and set their values from the Query object.
Here's a list of the programs you can install on windows:
And you'll need the following dependencies:
Requires numpy, dateutil, pytz, pyparsing, six
Checkstyle (or at least some of its rules) needs the compiled classes in
addition to the sources. You can prevent passing of the compiled classes
(and thus compilation) with tasks.withType(Checkstyle) { classpath =
files() }, but it may have negative consequences on the analysis.
Maybe Action is what you are looking for:
public xx(Action a)
{
a();
}
xx(()=> Process.Start("notepad.exe", @"C:\OutputLog.txt"));
That's a tricky one. This:
child pid 13633 exit signal Bus error (7)
means it's dying somewhere - most likely (although not guaranteed) in the
PHP code.
I would suggest:
1) Make sure that that is actually the error that is occurring when the
Apache daemon dies. I.e. do whatever you do that stops Apache and look in
the log and see that that's the correct error.
2) If it is, enable XDebug on your Apache server and find out what line
it's dying on. (XDebug can take a little work to set up
but it's a good tool to determine where in PHP your stuff dies ). If not,
post what the last error(s) is/are.
nb. Magento is not noted for being a simple and easy to debug code base, I
stopped using it for that reason (not because it was dying, because it is
fat and complex). Ju
I've never used Ganymed but just by looking at the code I suspect that you
send password too early. You probably need to handle what server responds
to you. In other words server receives not what you think it receives.
My best guess is that the hosted code is at a different location from where
it should be for a default VS solution.
The solution building is copying the code from you solutionproject folder
into the different location.
Try to check the location of the actual deployment. Depending upon whether
you are using IIS, IIS Express there are different ways to know that.
The following should work the same.
<path id="ant.tasks">
<fileset dir="lib" includes="myspecialant.jar"/>
</path>
<taskdef name="TaskName" classname="mypackage.MyClass"
classpathref="ant.tasks"/>
I prefer to manage my classpaths at the top of my build separate to the
logic that uses them. Make troubleshooting simpler.
enums are not compile-time constants. So their value is not copied by the
compile to every class that uses them. That's different from int values,
which can be compile time constants.
So if you have a class like
public class Constants {
public static final int FOO = 1;
}
and have another class
public class Client {
public static void main(String[] args) {
System.out.println(Constants.FOO);
}
}
the class will print 1. Now change the declaration of FOO to
public static final int FOO = 27;
And recompile Constants without recompiling Client. Execute the Client. The
printed value will still be 1, because it has been copied by the compiler
to the Client class when the Client class was compiled.
There is no way to have this effect using enums. If you store a value a | http://www.w3hello.com/questions/Is-a-key-value-store-appropriate-for-a-usage-that-requires-ldquo-get-all-rdquo- | CC-MAIN-2018-17 | refinedweb | 2,717 | 64.41 |
Building.
The WSS Web Services are part of the Microsoft.SharePoint.SoapServer namespace. The web services include methods to accessing and customizing SharePoint site content such as lists, site data, forms, meetings, document workspaces, and permissions. The table below shows the WSS web services, including a description of what they can be used for and the reference.
MOSS 2007 also exposes a series of web services. The MOSS web services allow you to use the Business Data Catalog, document management, Enterprise Search, Excel Services, InfoPath Forms Services, and Web content management (WCM). The table below shows most of the web services, there use and the reference you need to use them.
If you want to use one of the SharePoint web serrvices in your visual studio project you can simply add a Web Reference using the path of the SharePoint site for which you want to use the Web Service and add the web service reference to it. Your reference will then look something like this http://[servername]/[sites]/[sitecollectionname]/[sitename]/[subsitename]/_vti_bin/[webservicereference].asmx.
For more information on WSS Web Service look here.
More information on MOSS Web Services can be found here.
Pingback from » SharePoint Web Services
Pingback from Links (7/22/2008) « Steve Pietrek - Everything SharePoint
SharePoint in the News Beware of 'Chaos' SharePoint Can Create (PC World) A report from Forrester
Hi !
Can I publish and approve a page via sharepoint web services??
I can do a checkin and checkout in Lists webservice but I cant find how to publish.
Thanks in advance. | http://www.sharepointblogs.com/mirjam/archive/2008/07/22/sharepoint-web-services.aspx | crawl-002 | refinedweb | 257 | 62.17 |
enh wrote:
> On Fri, Nov 6, 2009 at 02:28, Mark Hindess <mark.hindess@googlemail.com> wrote:
>> In message <4AF3F7DE.3020400@gmail.com>, Tim Ellison writes:
>>>.
>> Agreed. I think that is what elliott was suggesting earlier in this
>> thread. Definitely seems like we should fix the semantics of these tty
>> checks so they aren't just about stdin.
>
> yes and no. i was restricting myself to the question "how should we
> query the number of available bytes?". on Unix, the answer is with the
> FIONREAD ioctl. so my claim was really:
>
> * there should be a "static native int available(FileDescriptor fd)"
> method generally available
> * the Unix implementation should use the FIONREAD ioctl.
> * any platform-specific hacks (such as, say, if Windows really needs
> the three-seek hack) should be in the native code for the broken
> platforms.
> * any Java should use the intention-revealing
> "available(FileDescriptor)" method and not concern itself with how
> that works on any given platform.
>
> we've done that for Android. (if Windows forces you to treat files and
> ttys completely differently, your life will be more difficult, but i
> still think that should be in the platform-specific code. as you
> pointed out with the "/dev/tty" example, on Unix it's not just
> inefficient, it's wrong.)
>
> for the bigger question of BufferedReader.readLine, i don't think we
> should be calling "available" at all here. i think somewhere there
> must be the equivalent of a "readFully" rather than a "read", and
> that's the mistake that needs fixing. fix that, and the "available"
> hack can go completely.
I just found a way can remove the in.available() check here. Instead of checking
in.available, I check whether the last char read from stream is '\n'.
Index: modules/luni/src/main/java/java/io/InputStreamReader.java
=====================================================================
--- modules/luni/src/main/java/java/io/InputStreamReader.java
+++ modules/luni/src/main/java/java/io/InputStreamReader.java
@@ -246,14 +246,9 @@ public class InputStreamReader extends Reader {
while (out.hasRemaining()) {
// fill the buffer if needed
if (needInput) {
- try {
- if ((in.available() == 0)
- && (out.position() > offset)) {
- // we could return the result without blocking read
- break;
- }
- } catch (IOException e) {
- // available didn't work so just try the read
+ if ((out.position() > offset) && out.get(out.position() -
1) == '\n') {
+ // we could return the result without blocking read
+ break;
}
int to_read = bytes.capacity() - bytes.limit();
--
Best Regards,
Regis.
>
> try this:
>
> #include <unistd.h>
> int main() {
> ssize_t n;
> char buf[1024];
> while ((n = read(0, buf, sizeof(buf))) > 0) {
> write(2, ">", 1);
> write(2, buf, n);
> }
> return 0;
> }
>
> if you run that and type "hello" then return, then "world" then
> return, you'll see (mixing your input and its output):
>
> hello
>> hello
> world
>> world
>
> because it's the terminal's job to make sure you get data at the end
> of a line. (or not, if so configured, but if someone has configured
> their terminal that way, that's their decision, they should expect
> programs' behavior to change, and we certainly shouldn't go around
> reconfiguring the terminal.)
>
> by contrast, if you run this (where my program above has been built as
> "read") like this:
>
> echo -e "hello\nworld\n" | ./read
>
> you see:
>
>> hello
> world
>
> i.e. both lines arrive as a single chunk of data.
>
> (you didn't say exactly _how_ the test fails, so i'm just guessing
> it's related to this.)
>
> --elliott
>
>> Still need to look at the available natives too since elliott suggested
>> that we should replace the 3*seek with a single ioctl for all cases
>> (though this might not be as portable).
>>
>> -Mark.
>>
>>>>>.
>>
>>
>
>
>
--
Best Regards,
Regis. | http://mail-archives.apache.org/mod_mbox/harmony-dev/200911.mbox/%3C4AF8EDA6.9080703@gmail.com%3E | CC-MAIN-2017-09 | refinedweb | 600 | 65.93 |
Link: D. Shichikuji and Power Grid
There are n new cities located in Prefecture X. Cities are numbered from 1 to n. City i is located xi km North of the shrine and yi km East of the shrine. It is possible that (xi,yi)=(xj,yj) even when i≠j.
Shichikuji must provide electricity to each city either by building a power station in that city, or by making a connection between that city and another one that already has electricity. So the City has electricity if it has a power station in it or it is connected to a City which has electricity by a direct connection or via a chain of connections.
Building a power station in City i will cost ci yen;
Making a connection between City i and City j will cost ki+kj yen per km of wire used for the connection. However, wires can only go the cardinal directions (North, South, East, West). Wires can cross each other. Each wire must have both of its endpoints in some cities. If City i and City j are connected by a wire, the wire will go through any shortest path from City i to City j. Thus, the length of the wire if City i and City j are connected is |xi−xj|+|yi−yj| km.
Shichikuji wants to do this job spending as little money as possible, since according to her, there isn’t really anything else in the world other than money. However, she died when she was only in fifth grade so she is not smart enough for this. And thus, the new resident deity asks for your help.
And so, you have to provide Shichikuji with the following information: minimum amount of yen needed to provide electricity to all cities, the cities in which power stations will be built, and the connections to be made.
If there are multiple ways to choose the cities and the connections to obtain the construction of minimum price, then print any of them.
Input
First line of input contains a single integer n (1≤n≤2000) — the number of cities.
Then, n lines follow. The i-th line contains two space-separated integers xi (1≤xi≤1e6) and yi (1≤yi≤106) — the coordinates of the i-th city.
The next line contains n space-separated integers c1,c2,…,cn (1≤ci≤1e9) — the cost of building a power station in the i-th city.
The last line contains n space-separated integers k1,k2,…,kn (1≤ki≤1e9).
Output
In the first line print a single integer, denoting the minimum amount of yen needed.
Then, print an integer v — the number of power stations to be built.
Next, print v space-separated integers, denoting the indices of cities in which a power station will be built. Each number should be from 1 to n and all numbers should be pairwise distinct. You can print the numbers in arbitrary order.
After that, print an integer e — the number of connections to be made.
Finally, print e pairs of integers a and b (1≤a,b≤n, a≠b), denoting that a connection between City a and City b will be made. Each unordered pair of cities should be included at most once (for each (a,b) there should be no more (a,b) or (b,a) pairs). You can print the pairs in arbitrary order.
If there are multiple ways to choose the cities and the connections to obtain the construction of minimum price, then print any of them.
Examples
input
3
2 3
1 1
3 2
3 2 3
3 2 3
output
8
3
1 2 3
0
input
3
2 1
1 2
3 3
23 2 23
3 2 3
output
27
1
2
2
1 2
2 3
Note
For the answers given in the samples, refer to the following diagrams (cities with power stations are colored green, other cities are colored blue, and wires are colored red):
For the first example, the cost of building power stations in all cities is 3+2+3=8. It can be shown that no configuration costs less than 8 yen.
For the second example, the cost of building a power station in City 2 is 2. The cost of connecting City 1 and City 2 is 2⋅(3+2)=10. The cost of connecting City 2 and City 3 is 3⋅(2+3)=15. Thus the total cost is 2+10+15=27. It can be shown that no configuration costs less than 27 yen.
Problem solving:
这道题的意思就是给你n个城市,现在要为这n个城市通电。每个城市自己建造一座电站的花费告诉我们了,每两个城市通过电线连起来的花费是这两个城市的曼哈顿距离与这两个城市拉电线的花费之和的乘积。现在问你能给n个城市通电的最小花费。
我们可以用最小生成树写。首先设置一个超级源点,让这个点与n个城市建边,边的权值为这个城市建造电站的花费。再给每两个城市之间建边,边的权值是这两个城市拉电线的花费。然后最小生成树写一发就行了。还有个很恶心的东西就是输出的控制。不过也很好实现。
Code:
#include <bits/stdc++.h> using namespace std; typedef long long ll; const int maxn = 2e6 + 10; ll f[maxn]; struct node { ll x, y; } p[maxn]; struct Node { ll u, v, w; } P[maxn]; bool cmp(Node a, Node b) { return a.w < b.w; } #define pil pair<ll, ll> bool cmp2(pil a, pil b) { return a.first < b.first; } ll find(ll x) { return x != f[x] ? f[x] = find(f[x]) : x; } void join(ll x, ll y) { x = find(x), y = find(y); if (x != y) f[x] = y; } vector<pil> Ans, AAns; ll r[maxn]; int main() { ios::sync_with_stdio(0); ll co, n, pos; cin >> n; pos = n + 1; for (int i = 0; i <= n; i++) f[i] = i; for (int i = 1; i <= n; i++) { cin >> p[i].x >> p[i].y; } for (int i = 1; i <= n; i++) { cin >> co; P[i].u = i, P[i].v = 0, P[i].w = co; } for (int i = 1; i <= n; i++) cin >> r[i]; for (int i = 1; i <= n; i++) for (int j = i + 1; j <= n; j++) { ll mid = (abs(p[i].x - p[j].x) + abs(p[i].y - p[j].y)) * (r[i] + r[j]); P[pos].u = j, P[pos].v = i, P[pos++].w = mid; } sort(P + 1, P + 1 + pos, cmp); ll ans = 0, MMid = 0, mmid = 0; for (int i = 1; i <= pos; i++) { if (find(P[i].u) != find(P[i].v)) { join(P[i].u, P[i].v), ans += P[i].w; if (P[i].u != 0 && P[i].v != 0) MMid++, Ans.push_back(make_pair(P[i].u, P[i].v)); else mmid++, AAns.push_back(make_pair(P[i].u, P[i].v)); } } cout << ans << endl; cout << mmid << endl; sort(AAns.begin(), AAns.end(), cmp2); for (int i = 0; i < AAns.size(); i++) { ll a1 = AAns[i].first, a2 = AAns[i].second; if (a1) cout << a1 << " "; else cout << a2 << " "; } cout << endl; cout << MMid << endl; for (int i = 0; i < Ans.size(); i++) { ll a1 = Ans[i].first, a2 = Ans[i].second; if (a1 > a2) swap(a1, a2); cout << a1 << " " << a2 << endl; } }
可能我的代码写得有点丑,不过思想到位了就行。我也懒得改了。 | https://cndrew.cn/2019/11/07/cf592/ | CC-MAIN-2019-47 | refinedweb | 1,139 | 81.63 |
exists, but for some reason
there is no precompiled package for your current settings. Maybe the package creator didn’t build
and shared pre-built packages at all and only uploaded the package recipe, or maybe they are only
providing packages for some platforms or compilers. E.g. the package creator built packages
from the recipe for gcc 4.8 and 4.9, but you are using gcc 5.4.
By default, conan doesn’t build packages from sources. There are several possibilities:
- You can try to build the package for your settings from sources, indicating some build policy as argument, like
--build libzmqor
--build missing. If the package recipe and the source code work for your settings you will have your binaries built locally and ready for use.
- If building from sources fail, you might want to fork the original recipe, improve it until it supports your configuration, and then use it. Most likely contributing back to the original package creator is the way to go. But you can also upload your modified recipe and pre-built binaries under your own username too.
ERROR: Invalid setting¶
It might happen sometimes, when you specify a setting not present in the defaults that you receive a message like this:
$ conan install . -s compiler.version=4.19 ... ERROR: Invalid setting '4.19' is not a valid 'settings.compiler.version' value. Possible values are ['4.4', '4.5', '4.6', '4.7', '4.8', '4.9', '5.1', '5.2', '5.3', '5.4', '6.1', '6.2'] Read ""
This doesn’t mean that such architecture is not supported by conan, it is just that it is not present in the actual
defaults settings. You can find in your user home folder
~/.conan/settings.yml a settings file that you
can modify, edit, add any setting or any value, with any nesting if necessary. See Customizing settings.
As long as your team or users have the same settings (you can share with them the file), everything will work. The settings.yml file is just a mechanism so users agree on a common spelling for typically settings. Also, if you think that some settings would be useful for many other conan users, please submit it as an issue or a pull request, so it is included in future releases.
It is possible that some build helper, like
CMake will not understand the new added settings,
don’t use them or even fail.
Such helpers as
CMake are simple utilities to translate from conan settings to the respective
build system syntax and command line arguments, so they can be extended or replaced with your own
one that would handle your own private settings.
ERROR: Setting value not defined¶
When you install or create a package, it is possible to see an error like this:
ERROR: Hello/0.1@user/testing: 'settings.arch' value not defined
This means that the recipe defined
settings = "os", "arch", ... but a value for the
arch setting was
not provided either in a profile or in the command line. Make sure to specify a value for it in your profile,
or in the command line:
$ conan install . -s arch=x86 ...
If you are building a pure C library with gcc/clang, you might encounter an error like this:
ERROR: Hello/0.1@user/testing: 'settings.compiler.libcxx' value not defined
Indeed, for building a C library, it is not necessary to define a C++ standard library. And if you provide a value, you might end with multiple packages for exactly the same binary. What has to be done is to remove such subsetting in your recipe:
def configure(self): del self.settings.compiler.libcxx
ERROR: Failed to create process¶
When conan is installed via pip/PyPI, and python is installed in a path with spaces (like many times in Windows “C:/Program Files…”), conan can fail to launch. This is a known python issue, and can’t be fixed from conan. The current workarounds would be:
- Install python in a path without spaces
- Use virtualenvs. Short guide:
$ pip install virtualenvwrapper-win # virtualenvwrapper if not Windows $ mkvirtualenv conan (conan) $ pip install conan (conan) $ conan --help
Then, when you will be using conan, for example in a new shell, you have to activate the virtualenv:
$ workon conan (conan) $ conan --help
Virtualenvs are very convenient, not only for this workaround, but to keep your system clean and to avoid unwanted interaction between different tools and python projects.
ERROR: Failed to remove folder (Windows)¶
It is possible that operating conan, some random exceptions (some with complete tracebacks) are produced, related to the impossibility to remove one folder. Two things can happen:
- The user has some file or folder open (in a file editor, in the terminal), so it cannot be removed, and the process fails. Make sure to close files, specially if you are opening or inspecting the local conan cache.
- In Windows, the Search Indexer might be opening and locking the files, producing random, difficult to reproduce and annoying errors. Please disable the Windows Search Indexer for the conan local storage folder. | https://docs.conan.io/en/1.17/faq/troubleshooting.html | CC-MAIN-2022-27 | refinedweb | 848 | 60.24 |
I am new to programming but i have done basic input output operation, now i want to go for link list programming starting from single list.
so, before that i am having problem with DEv C++ comipler it is not compiling source file.
code is:->
Code:
#include <iostream> int main() { using namespace std; cout << "hey"; system("pause"); return 0; }
Select All
and error listed below are.
i:\gw\lib\crt2.o(.text+0x8) In function `_mingw_CRTStartup': [Linker error] undefined reference to `__dyn_tls_init_callback' [Linker error] undefined reference to `__cpu_features_init' i:\gw\lib\crt2.o(.text+0x8) ld returned 1 exit status
This post has been edited by JackOfAllTrades: 15 May 2012 - 08:58 AM | http://www.dreamincode.net/forums/topic/277866-help-need-for-programming-link-list-in-dev-c/ | CC-MAIN-2016-07 | refinedweb | 113 | 51.38 |
Day Trading Event Analysis
Can we find patterns from a large enough dataset of trades?
All the analysis is done with the Python Pandas library, with data from Quantopian and reporting on the Python Django platform.
In day trading, there are times when your knowledge and experience work against you more often than not; in those cases, doing this type of analysis may be a worthwhile exercise.
Context
Frame
Some of the questions I want to put forward to the model are:
- Do stocks with different market caps and floats behave differently when gapped after open?
- Does the prior day trend affect movement after open? As the trend could be different at different times of the day, is it possible to look at the last 15 - 30 minutes of the prior day's trend?
- Does the trend of the index the stock is on affect the direction of the gap?
- Can movement in the first minute predict movement in the subsequent 5 minutes?
- Can the above be applied for the next 10, 15, 30, 60 minutes? If traded at this time after open, will it still be a profitable trade?
Process
This will be done as a proper data analysis project, implementing best practices for organisation and making the approach as modular as possible. The directory structure will be based on the format where possible and will follow this sequence:
1. Getting and formatting data for analysis
2. Creating features
3. Implementing algorithms
4. Presentation
Each of these will be further broken down depending on the scale and complexity of the task.
It is unlikely that the process will be a linear one, but rather an iterative one as discoveries in the future may lead to changing the original frame of the project or even refining some of the questions.
As the data is proprietary and not shareable, it cannot be hosted in the repository, but the code for the data processing will be available either as a Python script or an IPython notebook where available.
Getting the Data

Data is gathered in a Quantopian research notebook. Quantopian gives intraday data down to the minute, with a massive number of securities at your disposal. All analysis has to be done within a Quantopian IPython Notebook, as data cannot be exported to be processed locally.
The analysis starts by getting a list of NASDAQ securities. These will be used to search for fundamental data in Yahoo Finance. Getting the original list of securities will be done locally and then exported to the Quantopian environment.
List of Securities Traded on the NASDAQ.
import pickle
import string

import bs4 as bs
import requests

tickers = []
letters = []
for i in string.ascii_uppercase:
    letters.append(i)
letters.append('0')

for letter in letters:
    print(letter)
    # Note: the base URL was elided in the original post; the '{}' below
    # is formatted with the letter being fetched.
    resp = requests.get('{}'.format(letter))
    soup = bs.BeautifulSoup(resp.text, 'html5lib')
    table = soup.find('table', {'class': 'market tab1'})
    for row in table.find_all('tr')[2:]:
        tickers.append(row.find_all('td')[1].find('a').text)

with open('nasdaq.pickle', 'wb') as f:
    pickle.dump(tickers, f)
On the whole, there are about 3500 ticker symbols available from this process. Not all will be used in the final analysis as only the most tradeable will eventually be selected.
Include Market Cap and Float.
Get Float
nasdaq_stocks = {}
with open('nasdaq.pickle', 'rb') as f:
    nasdaq = pickle.load(f)

# Note: the base URL was elided in the original post; the first '{}'
# takes the ticker symbol.
URL = '{}/key-statistics?p={}'
for stock in nasdaq:
    print('Getting float for {}'.format(stock))
    resp = requests.get(URL.format(stock, stock))
    soup = bs.BeautifulSoup(resp.text, 'html5lib')
    try:
        stock_float = soup.find('td', {'class': 'Fz(s) Fw(500) Ta(end)'}).text
    except Exception:
        stock_float = 'UNK'
    nasdaq_stocks[stock] = {}
    nasdaq_stocks[stock]['float'] = stock_float

with open('nasdaq_stocks.pickle', 'wb') as f:
    pickle.dump(nasdaq_stocks, f)

# Remove tickers where float is unavailable
with open('nasdaq_stocks.pickle', 'rb') as f:
    tickers = pickle.load(f)

tickers_final = {}
for key in tickers:
    if not (tickers[key]['float'] == 'N/A' or tickers[key]['float'] == 'UNK'):
        tickers_final[key] = {}
        tickers_final[key]['float'] = tickers[key]['float']

with open('nasdaq_stocks_final.pickle', 'wb') as f:
    pickle.dump(tickers_final, f)
Get Market Cap
# Note: the base URL was elided in the original post.
URL = '{}/key-statistics?p={}'
nasdaq_stocks_final = pickle.load(open('nasdaq_stocks_final.pickle', 'rb'))
for stock in nasdaq_stocks_final:
    print('Getting Market Cap for {}'.format(stock))
    resp = requests.get(URL.format(stock, stock))
    soup = bs.BeautifulSoup(resp.text, 'html5lib')
    try:
        market_cap = soup.find('td', {'class': 'Fz(s) Fw(500) Ta(end)'}).text
    except Exception:
        market_cap = 'UNK'
    nasdaq_stocks_final[stock]['mktcap'] = market_cap

with open('nasdaq_stocks_final.pickle', 'wb') as f:
    pickle.dump(nasdaq_stocks_final, f)
The values for market cap and float in Yahoo are categorical variables. Because they are strings, they cannot be compared with each other. They need to be converted into numeric values and then binned so they are easier to deal with and analyse.
def get_value_from_string(val='UNK'):
    # Convert Yahoo notation of float and market cap to numbers
    if val == 'UNK':
        return 0
    if val[-1] == 'B':
        return float(val[:-1]) * 1000
    if val[-1] == 'M':
        return float(val[:-1]) * 1
    return 9999999999

with open('nasdaq_stocks_final.pickle', 'rb') as f:
    nasdaq_stocks_final = pickle.load(f)

dfStocks = pd.DataFrame(index=nasdaq_stocks_final.keys(), columns=['FLOAT', 'MKTCAP'])
for stock in nasdaq_stocks_final:
    dfStocks.loc[stock]['FLOAT'] = nasdaq_stocks_final[stock]['float']
    dfStocks.loc[stock]['MKTCAP'] = nasdaq_stocks_final[stock]['mktcap']

dfStocks['FLOAT_VAL'] = dfStocks['FLOAT'].apply(get_value_from_string)
dfStocks['MKTCAP_VAL'] = dfStocks['MKTCAP'].apply(get_value_from_string)
dfStocks['FLOAT_SCALED'] = pd.qcut(dfStocks['FLOAT_VAL'], 10, labels=False)
dfStocks['MKTCAP_SCALED'] = pd.qcut(dfStocks['MKTCAP_VAL'], 10, labels=False)

with open('dfStocks.pickle', 'wb') as f:
    pickle.dump(dfStocks, f)
The final result is a Pandas Data Frame with some fundamental data that can be imported into a Quantopian IPython notebook.
Getting the Stock Price Movements from Quantopian.
Some definitions.
A Pipeline is a feature in Quantopian that lets you look at a large number of stocks and associated data on them. A Pipeline is useful here to confirm whether the stocks from the list are available in the Quantopian database.
The initial part of the analysis will be converting the symbols in the list to symbol objects in Quantopian.
from quantopian.pipeline import Pipeline
from quantopian.research import run_pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.factors import SimpleMovingAverage
from quantopian.pipeline.filters import StaticAssets
from quantopian.pipeline.filters import Q1500US
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np

df = local_csv('stocks.csv')
watchSymbols = symbols(df.columns[:-1])
watchAssets = StaticAssets(watchSymbols)
From here, we need the final list of stocks which Quantopian has data for. A simple way of getting it is to run a pipeline with the stocks in the list. If there is data available, the stocks will be returned in the pipeline.
# Get a list of assets with data available in Quantopian
def make_pipeline():
    return Pipeline(screen=watchAssets)

my_pipe = make_pipeline()
result = run_pipeline(my_pipe, start_date='2018-01-05', end_date='2018-01-05')
stocks = result.index.get_level_values(1).values
# Get data for analysis
# This will be daily data to compare the close price for the day with the
# open price for the next day
start_date = '2015-01-01'
end_date = '2017-12-31'
dfOpen = get_pricing(stocks, start_date=start_date, end_date=end_date,
                     fields='open_price', frequency='daily')
dfClose = get_pricing(stocks, start_date=start_date, end_date=end_date,
                      fields='close_price', frequency='daily')
dfVolume = get_pricing(stocks, start_date=start_date, end_date=end_date,
                       fields='volume', frequency='daily')
# List of events that satisfy a criteria
# Greater than 4% and less than -4% have been arbitrarily chosen
# The output of this step is to get a list of events by date and security
intEvent = 0.04
lEvents = []
for equity in dfOpen.columns:
    for i in range(1, len(dfOpen.index)):
        price_today = dfOpen[equity].ix[dfOpen.index[i]]
        price_yest = dfClose[equity].ix[dfClose.index[i - 1]]
        volume = dfVolume[equity].ix[dfOpen.index[i]]
        fPriceChange = (price_today - price_yest) / price_yest
        if (fPriceChange > intEvent or fPriceChange < -intEvent) and (volume > 300000):
            # print i, equity
            # dfEvents[equity].ix[df.index[i]] = 1
            date_0 = dfOpen.index[i]
            shift_back = list(dfOpen.index).index(dfOpen.index[i]) - 1
            shift_front = list(dfOpen.index).index(dfOpen.index[i])
            try:
                date_start = dfOpen.index[shift_back]
                date_end = dfOpen.index[shift_front]
                lEvents.append([date_0, date_start, date_end, equity,
                                price_yest, price_today, fPriceChange])
            except:
                pass
Initial Analysis
The previous posts in this series focused on getting datasets for analysis. This one looks at doing preliminary analysis on the datasets and getting them into a format that easily lends itself to analysis.
Number of Gaps.
def get_change(change):
    if change > 0:
        return 1
    else:
        return -1

df['GAP'].apply(get_change).value_counts()
This shows a count of 5,000 instances of a gap up and 4,000 instances of a gap down. Broken down by size of float:
pd.pivot_table(df[['FLOAT_LABEL', 'GAP_VAL']], columns='GAP_VAL', index=['FLOAT_LABEL'], aggfunc=np.count_nonzero)
Distribution by Float
Distribution by Market Cap
From the initial analysis, this looks like a good distribution of transaction events with a good spread of company sizes.
Create Features.
The outcome of this post will be to create a final dataset with all the features for analysis.
- GAP_DIRECTION - the direction the stock moved from the previous day (1 for a move up, -1 for a move down).
- FLOAT - The size of the companies outstanding share float. To properly feed the algorithms this has been scaled and labelled for proper reporting.
- MKTCAP - The market cap of the company from Yahoo data.
- PD_VOLUME - Prior day volume. There are also features for trends in the last 15 and 30 minutes.
- PD_PRICE - Prior day price movements. For the last 15 and 30 minutes for the previous day.
- OPENING_CANDLES - Shows how price behaved on the open after the gap. There are candles for the first minute and 5 - 60 mins.
- RETURN_PERCENT - Percent return if applying gap strategy for particular trade.
MOVEMENT_E.
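As a rough illustration of how a couple of these features could be derived, here is a hedged sketch in pandas. The PREV_CLOSE and OPEN column names are assumptions based on the descriptions above, not the actual dataset's schema, and the return calculation simply fades the gap:

```python
import pandas as pd
import numpy as np

# Hypothetical daily data; column names follow the feature descriptions
# above but are assumptions, not the actual dataset's schema.
df = pd.DataFrame({
    'PREV_CLOSE': [10.0, 20.0, 30.0],
    'OPEN':       [10.8, 19.0, 30.3],
})

# GAP: fractional move from the previous close to today's open
df['GAP'] = (df['OPEN'] - df['PREV_CLOSE']) / df['PREV_CLOSE']

# GAP_DIRECTION: 1 for a gap up, -1 for a gap down
df['GAP_DIRECTION'] = np.where(df['GAP'] > 0, 1, -1)

# RETURN_PERCENT: percent return if fading the gap (betting it closes)
df['RETURN_PERCENT'] = -df['GAP_DIRECTION'] * df['GAP'] * 100

print(df[['GAP', 'GAP_DIRECTION', 'RETURN_PERCENT']])
```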
Analysis After Features
The purpose of this step is to see if any of the features created in the previous steps have any predictive power on the movement of the price during the day.
The analysis will look at all the created features to see if there is a higher likelihood of gap closers over runners.
Gappers Up
Below is the relationships the created factors have with the movement of price over the end of day for stocks that have gapped up.
Prior Day Trends
On a cursory analysis, prior day trends don't seem to have any impact on price direction. This is observable from the fact that the gap closers and runners are equally distributed.
Time Based Factors
There doesn't seem to be any predictive power in time-based factors either. Month, weekday and year have relatively equal distributions for gap closers and runners.
Company Size
Market cap and Float seem to have a higher number of closers on the smaller valued companies. This then reverses to a smaller number of closers with larger companies.
Market Open
This looks like the most predictive factor for how the price will move over the day. If the price falls in the 15 minutes after open, then there is a high likelihood that the gap will close over the day.
So in summary, small cap companies that have gapped up but with a price fall after open are likely to move in the direction of closing the initial gap by the end of the day.
Gappers Down
The same analysis can be applied to gappers down.
Gappers down seem to be the opposite of gap ups. The larger the company size, the more likely the gap will close over the day.
The trading strategy with opening gaps should factor in company size when predicting movement during the day.
Activity
+1 on v9. Some minor changes before you check it in -
13. KafkaConfig
Typo - accross -> across
14. LogCleaner
Typo: ellapsed -> elapsed
15. We talked about this offline, but regarding review comment 6.3, I personally like renaming the .swap file to contain the names of the files it has cleaned, but there might be nuances, e.g. there is an OS limit to the length of a file name. Would you mind filing another bug to track that change?
Oops, dropped one change I mentioned in the v8 patch. V9 only restores the read and write buffers at the end of the segment to avoid churning on memory allocation (one liner).
Patch v8 includes Jun's comments. Specifically:
Cleaner:
70.1 Nice catch. Buffers now grow in offset map building. Also changed both offset map building and cleaning to keep the same buffer size for the duration of the segment to avoid growing and shrinking too frequently.
70.2 The message payload can be null and this is used to indicate a delete (note that null messages do go into the offset map but never survive a cleaning). Currently, though, there is no way to set the payload to null, and there are a number of bugs around null payloads. I will be opening a ticket to solve those.
70.3 Usually catching Throwable is a mistake, I think. I.e. if we are out of memory, the thread should die.
Log:
71.1 Removed the comment about rebuilding indexes.
71.2 Improved formatting for log statement in maybeRoll()
71.3 Nice catch. Incrementing truncates count in truncateFullyAndStartAt()
KafkaConfig
72. It would be easy to implement something where the log cleaner starts only if we have a log with dedupe. However it is a little trickier with topics that are dynamically added or for which the config is changed dynamically. I would like to leave it simple/stupid for now and, when we have the config change stuff ironed out, make the cleaner dynamically start when the first log becomes dedupe-enabled.
LogOffsetTest:
73. Changed log.retention.check.interval.ms to 5*60*1000 in LogOffsetTest.createBrokerConfig()
74. Added apache header to CleanerConfig, LogConfig, TestLogCleaning.
75. Added a comment for TestLogCleaning.
76. The bytes of a cryptographic hash are supposed to be uniformly distributed, so just using the first 4 bytes should be fine (I have previously tested this and it works well empirically too).
A few more comments:
73. LogOffsetTest.createBrokerConfig(): We should set log.retention.check.interval.ms to 5 mins, instead of 5ms.
74. CleanerConfig, LogConfig, TestLogCleaning: missing Apache header.
75. TestLogCleaning: Could you write in the comment how this test works?
76. In SkimpyOffsetMap, we use only the first 4 bytes (out of the 16 bytes of MD5) to calculate the array position of the hash. Would it be better to use all 16 bytes?
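For reference, the first-4-bytes idea can be sketched like this (an illustration only, not the actual SkimpyOffsetMap code): because MD5's output bytes are approximately uniformly distributed, the first 4 bytes alone already give a well-spread 32-bit value to reduce into the array, so the remaining 12 bytes buy little for slot selection.

```scala
import java.security.MessageDigest
import java.nio.ByteBuffer

// Illustrative sketch, not the real SkimpyOffsetMap: derive an array slot
// from a key by hashing it and using only the first 4 bytes of the digest.
def slotFor(key: Array[Byte], slots: Int): Int = {
  val digest = MessageDigest.getInstance("MD5").digest(key)
  // first 4 bytes as a non-negative int, then reduce into the table
  val h = ByteBuffer.wrap(digest, 0, 4).getInt() & 0x7fffffff
  h % slots
}

println(slotFor("user-42".getBytes("UTF-8"), 1024))
```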
Thanks for patch v7. Looks good overall. Some comments:
70. LogCleaner:
70.1 buildOffsetMap(): need to consider growing readBuffer to accommodate maxMessageSize.
70.2 cleanInto(): Can the payload ever be null?
val retainRecord = lastOffset < 0 || (entry.offset >= lastOffset && entry.message.payload != null)
70.3 CleanerThread.run(): Should we catch all Throwables, instead of Exceptions?
71. Log:
71.1 loadSegments(): The following comment is no longer true since it can happen to a segment with SwapFileSuffix.
if(!hasIndex) {
// this can only happen if someone manually deletes the index file
71.2 maybeRoll(): move .format in debug to a separate line.
71.3 truncateFullyAndStartAt(): This one behaves in the same way as truncateTo and is called directly from ReplicaFetcherThread. So we need to increment truncates here too.
72. KafkaConfig: Why do we have log.cleaner.enable? Shouldn't the log cleaner be automatically enabled if logCleanupPolicy is dedup?
Updated patch v6:
- Fixed a bug: messages larger than the I/O buffer would lead to an infinite loop. Now we raise the I/O buffer size when needed.
- Made I/O buffer configurable instead of hardcoding to 1MB. I also no longer subtract this from overall buffer size. The two configurations are now:
log.cleaner.dedupe.buffer.size
log.cleaner.io.buffer.size
Both give the size over all threads, so the per-thread size is the total divided by log.cleaner.threads.
Neha, I think this addresses all your concerns.
Sriram--These are great suggestions. For the most part I am taking these to be "future work" because I don't think they block a "minimally viable product" which is what I hope to get in first. My intuition is to avoid doing anything hard or complicated until we have real operational experience with this functionality because otherwise we end up building a ton of fancy stuff that solves the wrong problems. Patches would be gladly accepted, though.
1. This is a good suggestion. There is an additional assumption, which is combining read and write I/O. Read I/O may be coming out of pagecache (shared) or from disks (not shared). Likewise it isn't really the number of disks per se, since a RAID setup would effectively pool the I/O of all the disks (making the global throttler correct). We support multiple data directories with the recommendation that each data directory be a disk. We also know the mapping of log->data_directory. If we relied on this assumption we could do the throttling per data directory without too much difficulty. Of course that creates an additional scheduling problem, which is that we should ideally choose a cleaning schedule that balances load over data directories. In any case, I think the global throttle, while not as precise as it could be, is pretty good. So I am going to add this to the "future work" page.
2. Yes. In fact the current code can generate segments with size 0. This is okay though. There is nothing too bad about having a few small files. We just can't accumulate an unbounded number of small files that never disappear (some combining must occur). Small files will get cleaned up in the next run. So I knowingly chose this heuristic rather than doing dynamic grouping because it made the code easier and simpler to test (i.e. I can test grouping separate from cleaning).
3. Since you have to size your heap statically, shrinking the map size doesn't help anyone in the single-threaded case. Having a very sparse map just makes duplicates unlikely. However in the case where you had two threads it would be possible to schedule cleanings in such a way that you allocated small buffers for small logs and big buffers for big logs instead of medium buffers for both. Since these threads progress independently, though, it would be a bit complicated. Probably the small log would finish soon, so you would have to keep finding more small logs for the duration of the cleaning of the large log. And when the large cleaning did happen, you would probably have a small cleaning in progress, so you would have to start another cleaning with the same large buffer size if you wanted memory to remain fixed. However one thing this brings up is that if your logs are non-uniform, having non-uniform buffers (even if they are statically sized) could make it so you were able to efficiently clean large logs with less memory, provided your scheduling was sophisticated enough. There are a number of gotchas here though.
4. I created a cleaner log and after each cleaning I log the full cleaner stats (time, mb/sec, size reduction, etc).
5. There are three tests in the patch. A simple non-threaded method-by-method unit test. A junit integration test of the full cleaner running as a background thread with concurrent appends. Finally a stand-alone torture test that runs against an arbitrary broker by producing to N topics and recording all its produced messages, then consuming from the broker to a file, then sorting and deduplicating both files by brute force and comparing them exactly. This latter test is very comprehensive, runs over many hours, and can test any broker configuration. I ran it with multiple threads to validate that case (and found some bugs, which I fixed). I think a third thing that could be done (but which I haven't done) is to build a stand-alone log duplication checker that consumes a topic/partition and estimates the duplication of keys using a bloom filter or something like that. I haven't done the latter.
6. Intuitively this should not be true. By definition "independent" means that a sequential salt should perform as well as any other salt, or else that would be an attack on MD5, no?
Good stuff.
My feedbacks below -
1. Throttling
The current scheme of throttling will work only if there is one physical disk that Kafka uses, which I guess is not going to be the case. For multiple disks, the single throttler is not going to prevent some disks from getting saturated. A more accurate but complex solution is to do the following -
- Query the number of physical disks on the machine on startup. Divide the total bytes / sec allowed for the cleaner by this number (This is the value per disk).
- Create a throttler per disk.
- Have a background thread that refreshes the log directory mapping to the physical disk (this is in cases when the log directory gets moved to a different disk)
- Use the appropriate throttler based on the mapping above
This would be one of the ways you can control any single disk from getting saturated.
2. Grouping segments by size
The way the current grouping of segments is done based on size I think will not solve the problem of preventing very small segments. We decide the grouping even before deduplicating. I would assume somebody would choose dedup based GC only if they have lots of updates instead of new records. In such a scenario, all the old segments will eventually tend to 0 after dedup. If you were to calculate the total number of segments based on a max size before dedup, you could end up having very few records in that new segment. A more deterministic way to ensure each segment has a min size is to check the size as you append to the segment. If it has crossed the maxsize and is at the end of a segment boundary, do the swap with the segments read.
3. Determination of end offset
The code currently decides the end offset based on the map capacity. Consider the example you had quoted in the wiki about user updates. Very few active users would generate new events regularly and the majority would have unique records in a given time period. If many of the records within the dirty set are duplicates, you would not be making optimum use of your memory (the map would end up being only partially filled). I don't have a good answer for this but something to note. One option would be to keep reading the dirty set till you hit the map capacity or the beginning of the active segment.
4. Script to find actual deduplication
As part of your tests do you plan to measure the expected dedup level vs. the actual gain? As long as the gain is close to the expected value it is fine, but we do not want it to be way off.
5. Integration tests
Should the integration tests use more than one cleaner thread to catch any corner cases? I could have missed it but I did not find any test that does a sanity check of multiple cleaner threads functioning correctly.
6. Salt generation
Should the salt be randomly generated instead of being monotonically increasing? Empirically I have found it to perform better in terms of providing a more uniform distribution given a key namespace.
Great review. Attached patch v5 that addresses most of these issues:
1.1 Fixed "enableClenaer", dedupe is actually a word and is spelled dedupe, though, I think…
2. Changed
3.1 This is hard to explain, but changed it to "the minimum ratio of dirty log to total log for a log to eligible for cleaning"
3.2 Changed to ms.
3.3 Done
4.1. Done
4.2 Ah, nice catch. Fixed. Added test for it.
5.1 "Confusing but sophisticated" is my middle name. Basically I didn't like the code duplication and it seemed nice to see all the criteria whenever we roll. Fixed the ordering.
6.1 Fixed
6.2 I think you are saying we could change this to a require() call, right? Made that change.
6.3 Argh, you're right. I didn't think of that problem. It isn't easily fixable. Let's continue the review and I will think of a fix for this as a follow-up item. It isn't a critical problem because effectively you just duplicate a segment of the log needlessly (but with very low probability). The old segment will mask that portion of the new segment, but I don't think there is any other bad thing that happens.
6.4 CleanerTest.testSegmentGrouping() is a beast
6.5 Done
6.6 It can but it seems reasonable to ask for the last known cleaner point?
7.1 Fixed
7.2 3MB should be enough for anyone.
No, the real reason is because I require you to have a 1MB read buffer and a 1MB write buffer, which I cleverly subtract from the cleaner total buffer size. I don't think we need to make these configurable since 1MB is a good size (bigger won't help, and smaller will hurt). So you must have at least 2MB, but if you are trying to set a dedupe buffer that is less than 1MB, well, that is crazy. Maybe this is just me being a little too accurate about memory accounting and a better approach would just be to not count the I/O buffers at all. In that case the question is what is the minimum we should set for the cleaner buffer?
8.1 We can't do topicOverrides since these are full log configs not overrides. I suppose topicConfigs is better in case there was a question of what the String in the map was. Changed that.
9.1 Fixed.
10.1 Fixed
11. Not sure I follow. If you update the time manually and then call tick() we basically do "catch up", executing the tasks in order of next execution and cycling their period until we are caught up. The key point is that the user is the one who advances the time, not the scheduler. That is, the user says "it is now 12:15" and we execute our backlog of tasks. Perhaps you are saying that it should work the other way, where the scheduler advances the clock rather than vice versa?
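A toy sketch of that catch-up behavior (names and structure are assumptions, not the actual MockScheduler):

```scala
// Illustrative sketch of "catch up" semantics: the user advances the
// clock, and tick() runs every task whose nextExecution time has passed,
// cycling periods until caught up. Not the real MockScheduler.
case class Task(name: String, var nextExecution: Long, period: Long, fun: () => Unit)

class TinyMockScheduler {
  var now: Long = 0L
  private var tasks = List[Task]()
  def schedule(t: Task): Unit = tasks = t :: tasks
  def tick(newNow: Long): Unit = {
    now = newNow
    for (t <- tasks.sortBy(_.nextExecution)) {
      while (t.nextExecution <= now) {   // run the backlog for this task
        t.fun()
        t.nextExecution += t.period
      }
    }
  }
}

var runs = 0
val s = new TinyMockScheduler
s.schedule(Task("cleaner", nextExecution = 10L, period = 10L, fun = () => runs += 1))
s.tick(35L)   // catches up: runs at 10, 20, 30
println(runs)
```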
Reviewed patch v4 -
1. CleanerConfig
1.1 Typos - enableClenaer, dedupeBufferLoadFactor (probably dedupBufferLoadFactor is better?)
2. VerifiableProperties
"If the given key is not present" -> "If the given property is not present"
3. KafkaConfig
3.1 The comment for explaining log.cleaner.min.cleanable.ratio is confusing
"/* the minimum ratio of bytes of log eligible for cleaning to bytes to total bytes which a log must
contain to be eligible for cleaning */"
3.2 The config "log.retention.check.interval.ms" says the retention check is in milliseconds, but the name of the config is logCleanupIntervalMinutes and we multiply this value by 60K before passing it into LogManager
3.3 Can we document the different values for log.cleanup.policy in the comment ?
4. OffsetMap
4.1 Remove unused import "import java.util.concurrent._"
4.2 entries should be updated in put() API
5. Log
5.1 Rolling new log segment in %s (log = %d/%d, index = %d/%d, age = %d/%d)
This log statement got a little confusing but sophisticated. The last part of the statement should be index and the last but one should be age
6. LogCleaner
6.1 In the cleanSegments() API, we pass in SystemTime to the LogSegment. However, we already pass in a Time instance to LogCleaner. In order to test it independently, we can pass in MockTime to LogCleaner but we should pass in the same instance to LogSegment for it to work correctly.
6.2 In the cleanInto() API, we log a custom message in the IllegalArgumentException. I'm not sure I quite understood that. Aren't the log segments to be cleaned a mix of previously cleaned segments and yet to be cleaned ones ? Why not just use "require" like we did while building the offsetmap ?
6.3 If the server crashes in replaceSegments() after addSegment() and before asyncDeleteSegment() and let's say 2 log segments (xxxx.log,yyyy.log) were replaced with one new log segment(xxxx.log). Now, when this server restarts, the loadSegments() API will swap in the new xxxx.log.swap as xxxx.log, but it will leave yyyy.log.
6.4 Do we have a unit test to cover the grouping logic in groupSegmentsBySize() API ? It looks correct to me, but I've been bitten by several scala collection append nuances before.
6.5 Remove unused import "import java.util.concurrent.locks.ReentrantLock"
6.6 allCleanerCheckpoints() is only called from within LogCleaner. Can we make this private ?
7. CleanerConfig
7.1 Typo in API doc "enableClenaer" and "clenaer"
7.2 Why 3MB for the minimum buffer space per thread ? Can we keep this configurable as well ?
8. LogManager
8.1 Can we rename configs to topicConfigs or topicOverrides ?
9. LogSegment
9.1 Fix log4j statement for the .log renaming - "Failed to change the index file suffix"
10. ReplicaManager
In checkpointHighWatermarks(), it is better to use fatal("Error writing to highwatermark file: ", e)
11. MockScheduler
Even though this is not introduced in this patch, while reading the code, realized that the MockScheduler actually executes tasks before their nextExecution time is reached. This is because we just check if the nextExecutionTime <= now and then call task.fun() without waiting until nextExecution time.
Did some testing with multithreading, resulting in...
Patch v4:
1. Bug: Log selection wasn't eliminating logs already being cleaned which could lead to two simultaneous cleaner threads both cleaning the same log.
2. Improve logging to always include the thread number.
Attached patch v3. Two small changes:
1. Make memory usage more intuitive now that there is a read and write buffer for each cleaner thread. These are both fixed at 1MB per thread and taken out of the total buffer size given to cleaning.
2. Ensure that each new log segment is flushed before it is swapped into the log. Without this we can swap in a segment that is not on disk at all, delete the old segment, and then lose both in the event of a crash.
I did some testing on the I/O throttling and verified that this does indeed maintain the expected I/O rate. Two gotchas in this: first, you can't look at iostat, because the OS will batch up writes and then asynchronously flush them out at a rate greater than what we requested. Second, since the limit is on read and write combined, a limit of 5MB/sec will lead to the offset map building happening at exactly 5MB/sec, but the cleaning will be closer to 2.5MB/sec, because cleaning involves first reading in messages and then writing them back out, so 1MB of cleaning does 2MB of I/O (assuming 100% retention).
New patch, only minor changes:
1. Rebased against trunk at 9ee795ac563c3ce4c4f03e022c7f951e065ad1ed
2. Implemented seeding for the offset map hash so that now a different hash is used on each iteration so collisions between cleaning iterations should be independent.
3. Implemented batching in the cleaner's writes. This improves the per-thread performance from about 11MB/sec to about 64MB/sec on my laptop.
4. Add a special log4j log for cleaner messages since they are kind of verbose.
Thanks for the patch. Do you know the revision of trunk on which this patch will apply? I can take a look before you rebase.
This patch implements more or less what was described above.
Specific Changes:
- OffsetCheckpoint.scala: Generalize HighwaterMarkCheckpoint to OffsetCheckpoint for use in tracking the cleaner point. In the future we would use this for flush point too, if possible.
- Move configuration parameters in Log to a single class, LogConfig, to prepare for dynamically changing log configuration (also a nice cleanup)
- Implement a cleaner process in LogCleaner.scala that cleans logs, this is mostly standalone code. It is complicated but doesn't really touch anything else.
- Implement an efficient OffsetMap (and associated tests) for log deduplication
- Add an API in Log.scala that allows swapping in segments. This api is fairly specific to the cleaner for now and is not a public api.
- Refactor segment delete in Log.scala to allow reuse of the async delete functionality in segment swap
- Add logic in log recovery (Log.scala) to handle the case of a crash in the middle of cleaning or file swaps.
- Add a set of unit tests on cleaner logic (CleanerTest.scala), an integration test (LogCleanerIntegrationTest.scala) for the cleaner, and a torture test to run against a standalone server (TestLogCleaning.scala). The torture test produces a bunch of messages to a server over a long period of time and simultaneously logs them out to a text file. Then it uses Unix sort to dedupe this text file and compares the result to the result of consuming from the topic (if the unique key-set isn't the same for both it throws an error). It also measures the log size reduction.
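The verification step at the heart of that torture test can be sketched as follows (helper names are illustrative; the real test shells out to Unix sort over large files):

```scala
// Reduce a stream of (key, value) records to the latest value per key --
// the brute-force equivalent of what a deduplicated log should retain.
// Names are illustrative; the real test works on files, not in memory.
def latestByKey(records: Seq[(String, String)]): Map[String, String] =
  records.foldLeft(Map.empty[String, String]) { case (m, (k, v)) => m.updated(k, v) }

val produced = Seq("a" -> "1", "b" -> "1", "a" -> "2")  // everything sent
val consumed = Seq("b" -> "1", "a" -> "2")              // what the cleaned log kept

// the cleaned topic must match a brute-force dedup of the produced data
require(latestByKey(produced) == latestByKey(consumed), "unique key-sets differ")
```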
New configuration parameters:
- should we default to delete or dedupe for the cleanup policy?
log.cleanup.policy = delete/dedupe
- per-topic override for cleanup policy
topic.log.cleanup.policy = topic:delete/dedupe, …
- number of background threads to use for log cleaning
log.cleaner.threads=1
- maximum I/O the cleaner is allowed to do (read & write combined)
log.cleaner.io.max.bytes.per.second=Double.MaxValue
- the maximum memory the cleaner can use
log.cleaner.buffer.size=100MB
- the amount of time to sleep when there is no cleaning to do
log.cleaner.backoff.ms=30secs
- minimum ratio of new to old messages the log must have for cleaning to proceed
log.cleaner.min.cleanable.ratio=0.5
I also changed the configuration log.cleanup.interval.mins to log.retention.check.interval.ms because the word "cleanup" is confusing.
New Persistent Data
This patch adds a new persistent data structure, a per-data directory file 'cleaner-offset-checkpoint'. This is the exact same format and code as the existing 'replication-offset-checkpoint'. The contents of the file is the position in the log up to which the cleaner has cleaned.
Current State
This patch is mostly functional with a number of known limitations:
1. It is a lot of code, so there are likely bugs. I think most bugs would only affect log cleaning.
2. The cleaner is somewhat inefficient. Currently it does about 11MB/sec. I suspect this can be increased to around 70-100MB/sec by implementing batching of writes. I will do this as a follow-up ticket.
3. I do not properly handle compressed logs. Cleaning will work correctly but all messages are written uncompressed. The reason for this is that logically it is pretty complex to figure out what codec messages should be written with (since there may be a mixture of compression types in the log). Rather than try to handle this now, I think it makes more sense to implement dynamic config and then add a new config for log compression so that each topic has a single compression type that all messages are written with.
4. It would be nice to seed the hash with a different seed for each run so that collisions would get handled in the next run. This will also be done in a follow-up patch.
5. It would be nice to integrate the torture test into the nightly integration test framework (since it is a pass/fail test). I will work to do this as a separate item.
I would like to get this in in the current state and work on making log config dynamic. Without that feature this is not very useful since you have to bounce the server every time you add a new topic to set the cleanup policy. Once that is done we can use it for real features, which will likely uncover more issues than further testing now.
Status of Testing
- There is reasonable unit test coverage but I will likely add additional tests as real usage uncovers corner cases
- I can run the torture test for many hours on a few dozen gb of data and get correct results.
Sure, pass the ideas along to the distribution list and I'll see what chunks I can bite off
Yes, I am actually working on it now (forgot to assign to myself). If you are looking for a cool project I actually have a number of ideas...
Hey Jay, is this something you wanted to take on?
Here is a specific proposal:
We will retain the existing settings that retain segments based on bytes and time, with data prior to these limits left unmolested. We will introduce a new setting for each topic, "cleanup.policy"={delete, dedupe}. cleanup.policy=delete will correspond to the current behavior. cleanup.policy=dedupe will correspond to the new behavior described in this JIRA. As now, data that falls inside the retention window will not be touched, but data that is outside that window will be deduplicated rather than deleted. It is intended that this be a per-topic setting specified at topic creation time. As a short-cut for the purpose of this ticket I will just add a configuration map setting the policy in the way we have for other topic-level settings; these can all be refactored into something set in the create/alter topic command as a follow-up item.
Topics getting dedupe will be processed by a pool of background "cleaner" threads. These threads will periodically recopy old segment files removing obsolete messages and swapping in these new deduplicated files in place of the old segments. These sparse files should already be well-supported by the logical and sparse offset work in 0.8.
Here are the specific changes intended:
- Add a few new configs:
- topic.cleanup.policy={delete,dedupe} // A map of cleanup policies, defaults to delete
- cleaner.thread.pool.size=# // The number of background threads to use for cleaning
- cleaner.buffer.size.bytes=# // The maximum amount of heap memory per cleaner thread that can be used for log deduplication
- cleaner.max.{read,write}.throughput=# // The maximum bytes per second the cleaner can read or write
- Add a new method Log.replaceSegments() that replaces one or more old segments with a new segment while holding the log lock
- Implement a background cleaner thread that does the recopying. This thread will be owned and maintained by LogManager
- Add a new file per data directory called cleaner-metadata that maintains the cleaned section of the logs in that directory that have dedupe enabled. This allows the cleaner to restart cleaning from the same point upon restart.
The cleaning algorithm for a single log will work as follows:
1. Scan the head of the log (i.e. all messages since the last cleaning) and create a Map of key => offset for messages in the head of the log. If the cleaner buffer is too small to scan the full head of the log then just scan whatever fits going from oldest to newest.
2. Sequentially clean segments from oldest to newest.
3. To clean a segment, first create a new empty copy of the segment file with a temp name. Check each message in the original segment. If it is contained in the map with a higher offset, ignore it; otherwise recopy it to the new temp segment. When the segment is complete swap in the new file and delete the old.
The threads will iterate over the logs and clean them periodically (not sure the right frequency yet).
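A simplified, in-memory Python sketch of steps 1-3 (the function names and the (offset, key, value) tuple layout are illustrative; the real cleaner works over segment files with a dedicated offset map):

```python
def build_offset_map(head):
    """Phase 1: scan the head of the log, keeping the latest offset per key."""
    latest = {}
    for offset, key, _value in head:
        latest[key] = offset  # later offsets overwrite earlier ones
    return latest

def clean_segment(segment, latest):
    """Phase 2: recopy a segment, dropping messages superseded by a later offset."""
    return [(o, k, v) for (o, k, v) in segment
            if latest.get(k, -1) <= o]  # drop if a higher offset exists for the key

log = [(0, "a", "x"), (1, "b", "y"), (2, "a", "z"), (3, "c", "w")]
offsets = build_offset_map(log)
cleaned = clean_segment(log, offsets)
print(cleaned)  # only the latest value for key "a" survives
```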
Some Nuances:
1. The above tends to lead to smaller and smaller segment files in the tail of the log as records are overwritten. To avoid this we will combine files; that is, we will always collect the largest set of files that together are smaller than the max segment size into a single segment. Obviously this will be based on the starting sizes, so the resulting segment will likely still be smaller than the maximum segment size.
2. The recopying procedure depends on the property that logs are immutable. However our logs are only mostly immutable. It is possible to truncate a log to any segment. It is important that the cleaner respect this and not have a race condition with potential truncate operations. But likewise we can't lock for the duration of the cleaning as it may be quite slow. To work around this I will add a generation counter to the log. Each truncate operation will increment this counter. The cleaner will record the generation before it begins cleaning, and the swap operation that swaps in the new, cleaned segment will only occur if the generations match (i.e. if no truncates happened in that segment during cleaning). This will potentially result in some wasted cleaner work when truncations collide with cleanings, but since truncates are rare, and truncates deep enough into the log to interact with cleaning are very rare, this should almost never happen.
Cool, checked in with those fixes. Filed KAFKA-739, KAFKA-740, KAFKA-741. Jun, if you have any follow-up comments I will do those as a second checkin.
In the process of converting a lot of spaghetti code to organized script modules and multiple functions. One drawback I've noticed to this is that sometimes the error messages are truncated and not as helpful or specific as if the script was on the button onAction script. I will just get errors that show a key I tried to access in a dictionary that doesn't exist, or a type error, but no line number or even what function this occurred in. I am trying to mitigate that.
What I am looking for is some function I can call in the beginning of other functions so that it will print out the current function and what function called it if possible.
I tried doing it using the inspect module per this, doing the following in a function called from another function:
import inspect
print "running " + str(inspect.stack()[1][3]) + " called by " + str(inspect.stack()[2][3])
which just throws me an error like so
I assume this has to do with the odd runtime environment Ignition is (Java calling Python scripts, etc.), and so I'm not sure if there is a Python stack to really access like there would be in a traditional all-Python application.
Is there another way to do this? I would really like a function I could call as the first line of other functions that will just print “Running function (function name), called by (function name)” | https://forum.inductiveautomation.com/t/printing-the-name-of-the-function-and-who-called-it/37910 | CC-MAIN-2022-27 | refinedweb | 244 | 64.75 |
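For what it's worth, in plain CPython the frame stack can be walked directly with sys._getframe instead of inspect.stack(); whether this also works under Ignition's Jython runtime is an assumption here, not something verified:

```python
import sys

def whereami():
    # Frame of the function that called whereami()
    here = sys._getframe(1)
    # Frame of *its* caller, if any
    caller = here.f_back
    caller_name = caller.f_code.co_name if caller else "<top level>"
    return "running %s called by %s" % (here.f_code.co_name, caller_name)

def inner():
    return whereami()

def outer():
    return inner()

print(outer())  # running inner called by outer
```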
i have a 15 RPM motor and an RC car ESC. what would be simple code to run the motor from point A to point B, then stop for 30 seconds, then repeat? or is there a thread i can be pointed to
#include <Servo.h>

Servo TheESC;  // create servo object to control the Electronic Speed Control

void setup()
{
  TheESC.attach(9);  // attaches the ESC on pin 9 to the servo object
}

void loop()
{
  TheESC.write(180);  // Run the motor at full speed
  delay(2000);        // Wait two seconds (the time it takes to get from 'a' to 'b')
  TheESC.write(0);    // Stop the motor
  delay(30000);       // Wait thirty seconds before repeating
}
ok thanks, but i instead got an R3 motor shield and this is what i have so far
// i want the motor to do a full rotation then stop by the means of a rocker switch
// and do this repeated every 25 sec
// i also need a momentary button to override the program and just do one rotation of the motor

void setup() {
  pinMode(12, OUTPUT);  // Initiates Motor Channel A pin
  pinMode(9, OUTPUT);   // Initiates Brake Channel A pin
}

void loop() {
  digitalWrite(12, HIGH);  // Establishes forward direction of motor
  digitalWrite(9, LOW);    // Disengage the Brake for motor
  analogWrite(3, 255);     // Spins the motor at full speed
  delay(7550);
  digitalWrite(9, HIGH);   // Engage the Brake for motor
  delay(500);
}
You call this a 15RPM motor. I assume it is really a higher RPM motor, with a gearbox to reduce the output to 15RPM. Is that correct?
I don't think you want to use servo library.
What is the voltage of the motor? Got a URL?
And a URL to that motor shield would be great also.
Have you given it a try? What does it do, or not do?
If you have not yet given it a try, maybe use the blink code to see if you can get it to just run.
Let us know.
as of now i can just make it rotate at speed, and yes it's a gearbox motor that is from servocity.com
file into an XML DOM Document object. Then we use methods to access the nodes in the DOM tree. If you do not understand this terminology then research data structures and tree structures. The name=value pair is called an attribute of the element. So for our demo here I will choose a text-based adventure game for the example.
<world>
  <room name="room name">
    <door dir="direction" room="room name"/>
    <brief>
      Description text.
    </brief>
    <description>
      Description text.
    </description>
    <items>
      <item name="item name">
        <description>
          description text
        </description>
      </item>
      <item name="item name">
        <description>
          description text
        </description>
      </item>
    </items>
  </room>
</world>
The above shows the format. Note that we could make a DTD (Document Type Definition) which defines how this XML document is formed. But it's not necessary if you are careful to form it properly in the first place. I have not made any DTDs myself, and they do not seem easy to make. But if you used one, then when the document is loaded into the DOM, if it did not conform to the DTD you would get errors telling you where they are so you can fix them, from what I understand.
Our first example will not use the items element; I simply added that to show you something that might be needed to make a proper text-based adventure game. However, simply making a world of rooms you can navigate is quite easy. Now this example uses only one data type: String, or text. A more complex example might use all the data types. So we will later create a better example that uses all data types for some record type of storage.
So we will load the document and then get its children and build a list of a class called Room, probably put in a class called World. Each Room object will contain fields for briefDescription and description. It will also contain a list of Door. A Door can be one of the cardinal directions N, S, E and W, or U or D for up or down. A door leads to another Room, so it must also contain that room's name.
So we need some commands in order to navigate the game world. Obviously for navigation we will use n, s, e, w, u and d. To view a room's description we will use l or look. If a room has already been entered then we need to store a boolean flag alreadyEntered, so if we come back into the room only the brief description is shown. l would again show the long description, and entering a room for the first time shows the long description. In this example we will only be reading the XML file. In a later example of simple data records for all types I will show reading, altering and writing the XML file.
So we will be looking for these types of DOM nodes: Document, Element, Attr and Text.
To load the DOM we must:

import javax.xml.parsers.*;
import org.w3c.dom.*;
import java.io.*;
We make a File object.
We make a DocumentBuilderFactory object.
We get a DocumentBuilder object from the factory.
We get a Document object from the DocumentBuilder parse method, giving it the File object.
Of course we do this in a try/catch statement.
NodeList roomList = doc.getElementsByTagName("room"); will retrieve a list of all rooms in the world.
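The pieces above can be put together in a self-contained sketch (the class name is illustrative, and it parses from an in-memory stream rather than a File so it can run standalone):

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;

public class DomSketch {
    // Load an XML document and count its <room> elements.
    static int countRooms(String xml) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        DocumentBuilder db = dbf.newDocumentBuilder();
        // db.parse(new File("adventure.xml")) would be used for a real file;
        // here we parse from memory so the demo is self-contained.
        Document doc = db.parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        NodeList rooms = doc.getElementsByTagName("room");
        return rooms.getLength();
    }

    public static void main(String[] args) throws Exception {
        String xml = "<world><room name=\"a\"/><room name=\"b\"/></world>";
        System.out.println(countRooms(xml)); // 2
    }
}
```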
So below is the XML for our game world, which describes rooms to navigate. Then I give you the code for loading the XML and playing the text adventure at the Windows shell console.
<world>
  <room name="The Void" id="void">
    <description>
      You are standing in a dimly lit room that fades to darkness all around. In the center of the room is what looks like some kind of doorway with swirling bluish light emanating from its center outwards. As you walk around it you see it looks to be somewhat translucent and not solid. Your hand can pass into and through it. Since there is nothing better to do I recommend you enter and pass through the blue light doorway to see what happens.
    </description>
    <brief>
      You are in a dark room with a blue colored translucent swirling doorway.
    </brief>
    <door dir="enter" room="forest"/>
  </room>
  <room name="Forest" id="forest">
    <description>
      This is a fairly thick forest of mixed evergreen and green leafed trees. There are a few large moss covered rocks around. A very small trickling stream flows in from the north and out to the east. A game trail leads to the west and south. Also there is a blue swirling lighted doorway here. I think you came through this and it is what is known as a portal. But should you ever try to go back through it?
    </description>
    <brief>
      You are in a forest with a portal. There is a stream to the north and east and a trail to the west and south.
    </brief>
    <door dir="n" room=""/>
    <door dir="s" room=""/>
    <door dir="e" room=""/>
    <door dir="w" room="cliffs"/>
    <door dir="enter" room="void"/>
  </room>
  <room name="Large Cliffs" id="cliffs">
    <description>
      Trees disappear below a large cliff to the open skies to the west. A game trail emerges from thick forest to the east and seems to end, but you can navigate to the north and south along the cliff's bare rock. There is a large branch and log laying here that you have to maneuver around. It overhangs the cliff a little. A blueberry bush is next to it. A blue portal is on the cliff to the north.
    </description>
    <brief>
      You are standing on a large cliff next to an overhanging log. Paths lead north and south along the cliff or into the woods to the east.
    </brief>
    <door dir="s" room="ridge"/>
    <door dir="enter" room="island"/>
    <door dir="e" room="forest"/>
  </room>
  <room name="Tropical Island" id="island">
    <description>
      Seagulls squawk and fly all around. Tropical palms and undergrowth are all around. In the center of this room is a blue portal. White fluffy clouds mixed with deep blue skies are above. A coconut lies on the ground here. You can make out through the trees all around what looks to be ocean water and waves. A rat scurries away. Paths are in all directions here.
    </description>
    <brief>
      You are in the middle of a small tropical island with a blue portal. Paths are in all directions.
    </brief>
    <door dir="n" room="beach"/>
    <door dir="s" room="beach"/>
    <door dir="e" room="beach"/>
    <door dir="w" room="beach"/>
    <door dir="enter" room="cliffs"/>
  </room>
  <room name="Island Beach" id="beach">
    <description>
      You are standing on a sand/gravel beach. Ocean is out in front of you to the horizon. Tropical forest is behind you. Small blue crabs are scattered all around. You see a starfish in the water. You also see sardines swimming about. A path enters into the tropical forest behind you towards the center of the island. You can walk to the right or left along the beach and around the island.
    </description>
    <brief>
      You are standing on a beach. Ocean is in front of you and palm trees behind you. You may walk to the left or right along the beach. A path enters the palms behind you back to the center of the island.
    </brief>
    <door dir="l" room="beach"/>
    <door dir="r" room="beach"/>
    <door dir="enter" room="island"/>
  </room>
  <room name="South End of Ridge" id="ridge">
    <description>
      This is the bottom of a downward slope at the end of a ridge. You can climb up the crest of the ridge to some cliffs above. Trees are sparse here. There is a stand of tall cane that is very thick, and you think you hear water running on the other side to the south. There is a clearing to the west that looks like an open field. Woods are to the east and slope slightly upward. There is a crow in the tree above and you hear it 'call' occasionally. An aluminum beer can bottom shines in the penetrating sun beams.
    </description>
    <brief>
      Woods with river cane nearby. Clearing to the west. Woods to the east. Steep incline up the ridge to the north looks passable.
    </brief>
    <door dir="n" room="cliffs"/>
  </room>
</world>
And below is the code for playing a text-based adventure at the Windows shell console. This example is 230 lines of code. All it does is allow you to enter a few commands and navigate some rooms with text descriptions. It demonstrates a few nice, good-to-know Java concepts in the process. If you examine the code you will see areas where refactoring needs to take place before this prototype is improved. Refactoring is where you take complex logic and either make new methods or you make new classes and new methods. Where code is long or repetitive it begins to smell of needing a refactoring. This was only a prototype, so I kept it simple.
One important thing I showed you is how to use the Scanner class to read input from the console. It's super simple. Another thing which was not as simple was to remove whitespace and newlines from Node text content. This demo does not show you how to convert different data types from strings. I am going to write a part 2 of this article to show you how to read and write some simple records using each data type as in previous examples. When you start up this app you are told to enter the 'help' command to get a list of commands to use. You can try yourself to add a room into the game by adding one to the end of the XML file. Just use the other rooms as examples of how to build one. No recompile is necessary to add rooms. If you enter the XML wrong you will get a runtime exception. Some errors are more semantic, in that you might put a room id name in the wrong place. Give it a try. Add a room called Pasture that is to the west of the end of the ridge, or a pond in the woods to the east of the forest.
import java.io.*;
import java.util.*;
import javax.xml.parsers.*;
import org.w3c.dom.*;

public class XMLReader{

    public class Room{

        public void cleanDescriptions(){
            description = cleanString(description);
            brief = cleanString(brief);
        }

        /** This cleans xml node text content strings */
        String cleanString(String aString){
            String[] lines = aString.split("\n");
            StringBuilder sb = new StringBuilder();
            for (String line : lines) {
                sb.append(line.trim()+" ");
            }
            return sb.toString();
        }

        boolean visited=false;
        String name="";
        String id="";
        String description="";
        String brief="";
        String n="";
        String e="";
        String s="";
        String w="";
        String enter="";
        String l="";
        String r="";
    }

    ArrayList<Room> rooms = new ArrayList<Room>();

    public XMLReader(){
        File adventureXMLFile = new File("adventure.xml");
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        DocumentBuilder db;
        NodeList roomNodes = null;
        NodeList roomChildNodes = null;
        Room room = null;
        Node roomNode = null;
        Node roomChildNode = null;
        Node attributeNode = null;
        NamedNodeMap attributes = null;
        try {
            db = dbf.newDocumentBuilder();
            Document doc = db.parse(adventureXMLFile);
            roomNodes = doc.getElementsByTagName("room");
            for (int i=0; i<roomNodes.getLength(); i++){
                roomNode = roomNodes.item(i);
                room = new Room();
                attributes = roomNode.getAttributes();
                attributeNode = attributes.getNamedItem("name");
                room.name = attributeNode.getNodeValue();
                attributeNode = attributes.getNamedItem("id");
                room.id = attributeNode.getNodeValue();
                roomChildNodes = roomNode.getChildNodes();
                for(int c=0; c<roomChildNodes.getLength(); c++){
                    roomChildNode = roomChildNodes.item(c);
                    if(roomChildNode.getNodeName().equals("description"))
                        room.description = roomChildNode.getTextContent();
                    else if(roomChildNode.getNodeName().equals("brief"))
                        room.brief = roomChildNode.getTextContent();
                    else if(roomChildNode.getNodeName().equals("door")){
                        attributes = roomChildNode.getAttributes();
                        if(attributes.getNamedItem("dir").getNodeValue().equals("n"))
                            room.n = attributes.getNamedItem("room").getNodeValue();
                        if(attributes.getNamedItem("dir").getNodeValue().equals("e"))
                            room.e = attributes.getNamedItem("room").getNodeValue();
                        if(attributes.getNamedItem("dir").getNodeValue().equals("s"))
                            room.s = attributes.getNamedItem("room").getNodeValue();
                        if(attributes.getNamedItem("dir").getNodeValue().equals("w"))
                            room.w = attributes.getNamedItem("room").getNodeValue();
                        if(attributes.getNamedItem("dir").getNodeValue().equals("enter"))
                            room.enter = attributes.getNamedItem("room").getNodeValue();
                        if(attributes.getNamedItem("dir").getNodeValue().equals("l"))
                            room.l = attributes.getNamedItem("room").getNodeValue();
                        if(attributes.getNamedItem("dir").getNodeValue().equals("r"))
                            room.r = attributes.getNamedItem("room").getNodeValue();
                    }
                }
                room.cleanDescriptions();
                rooms.add(room);
            }
        }catch(Exception e){
            e.printStackTrace();
        }
        gameLoop();
    }

    void gameLoop(){
        boolean hideBrief = true;
        Scanner scan = new Scanner(System.in);
        System.out.println("\nWelcome to the game Portals Adventure. If you need help just ask (by typing help command).\n");
        String command = "";
        Room currentRoom = findRoom("void");
        while(!command.equals("quit")){
            if(currentRoom != null)
                if(currentRoom.visited == false){
                    System.out.println("\n"+currentRoom.name+"\n");
                    System.out.println(currentRoom.description+"\n");
                    currentRoom.visited = true;
                }else{
                    if(!hideBrief){
                        System.out.println("\n"+currentRoom.name+"\n");
                        System.out.println(currentRoom.brief+"\n");
                        hideBrief = false;
                    }
                }
            System.out.println("What do you want to do?");
            command = scan.nextLine();
            if(command.equals("help")){
                System.out.println("n       goes north");
                System.out.println("e       goes east");
                System.out.println("s       goes south");
                System.out.println("w       goes west");
                System.out.println("enter   goes through portal or into something.");
                System.out.println("l       goes left");
                System.out.println("r       goes right");
                System.out.println("look    displays long description");
                System.out.println("examine examines an item");
                System.out.println("get     item puts item in your backpack");
                System.out.println("debug   shows debug info about the current room");
                System.out.println("quit    exits to system");
                continue; // skip the unknown-command message below
            }else if(command.equals("debug")){
                debugRoom(currentRoom);
                continue; // skip the unknown-command message below
            }else if(command.equals("enter")){
                if(currentRoom.enter.equals("")){
                    System.out.println("\nThere is nothing to enter.\n");
                    hideBrief = true;
                }else{
                    currentRoom = findRoom(currentRoom.enter);
                    if(currentRoom.visited) hideBrief = false;
                }
                continue;
            }else if(command.equals("n")){
                if(currentRoom.n.equals("")){
                    System.out.println("\nYou can't go that way.\n");
                    hideBrief = true;
                }else{
                    currentRoom = findRoom(currentRoom.n);
                    if(currentRoom.visited) hideBrief = false;
                }
                continue;
            }else if(command.equals("e")){
                if(currentRoom.e.equals("")){
                    System.out.println("\nYou can't go that way.\n");
                    hideBrief = true;
                }else{
                    currentRoom = findRoom(currentRoom.e);
                    if(currentRoom.visited) hideBrief = false;
                }
                continue;
            }else if(command.equals("s")){
                if(currentRoom.s.equals("")){
                    System.out.println("\nYou can't go that way.\n");
                    hideBrief = true;
                }else{
                    currentRoom = findRoom(currentRoom.s);
                    if(currentRoom.visited) hideBrief = false;
                }
                continue;
            }else if(command.equals("w")){
                if(currentRoom.w.equals("")){
                    System.out.println("\nYou can't go that way.\n");
                    hideBrief = true;
                }else{
                    currentRoom = findRoom(currentRoom.w);
                    if(currentRoom.visited) hideBrief = false;
                }
                continue;
            }else if(command.equals("l")){
                if(currentRoom.l.equals("")){
                    System.out.println("\nYou can't go that way.\n");
                    hideBrief = true;
                }else{
                    currentRoom = findRoom(currentRoom.l);
                    if(currentRoom.visited) hideBrief = false;
                }
                continue;
            }else if(command.equals("r")){
                if(currentRoom.r.equals("")){
                    System.out.println("\nYou can't go that way.\n");
                    hideBrief = true;
                }else{
                    currentRoom = findRoom(currentRoom.r);
                    if(currentRoom.visited) hideBrief = false;
                }
                continue;
            }else if(command.equals("look")){
                System.out.println("\n"+currentRoom.name+"\n");
                System.out.println(currentRoom.description+"\n");
                hideBrief = true;
                continue;
            }else if((command.length()>=7)&&(command.substring(0,7).equals("examine"))){
                System.out.println("\nYou see nothing special about "+command.substring(8,command.length()));
                hideBrief = true;
                continue;
            }else if((command.length()>=3)&&(command.substring(0,3).equals("get"))){
                System.out.println("\nYou don't feel like carrying "+command.substring(4,command.length())+" today.");
                hideBrief = true;
                continue;
            }else if(command.equals("quit")){
                System.out.println("\nYes stop bingeing on text based adventure games and go get some stuff done. Goodbye!");
                break;
            }
            System.out.println("\nWTF? I did not understand that command.");
        }
    }

    Room findRoom(String id){
        Iterator<Room> roomsIterator = rooms.iterator();
        Room aRoom = null;
        while(roomsIterator.hasNext()){
            aRoom = roomsIterator.next();
            if(aRoom.id.equals(id)) return aRoom;
        }
        return null;
    }

    void debugRoom(Room aRoom){
        System.out.println("Room Debug Info");
        System.out.println("Name:"+aRoom.name);
        System.out.println("Visited:"+aRoom.visited);
        System.out.println("Description:"+aRoom.description);
        System.out.println("Brief:"+aRoom.brief);
        System.out.println("n:"+aRoom.n);
        System.out.println("e:"+aRoom.e);
        System.out.println("s:"+aRoom.s);
        System.out.println("w:"+aRoom.w);
        System.out.println("enter:"+aRoom.enter);
        System.out.println("l:"+aRoom.l);
        System.out.println("r:"+aRoom.r);
    }

    public static void main(String args[]){
        new XMLReader();
    }
}
Writing Portable Code in C++: A Vectors and String Issue.
One problem that may arise during the writing of code that runs on multiple platforms is the case sensitivity of the file system. Windows preserves the case of files, but its file system is not case sensitive; however, the Unix file system is case sensitive. On Unix, "Bounds.h" and "bounds.h" are two different files. So, be careful to include files in the proper case if you want to run your code on different platforms.
vector<int> vec;
for (int iIndex = 0; iIndex < 10; ++iIndex)
{
    vec.push_back(iIndex * iIndex);
}
random_shuffle(vec.begin(), vec.end());
After that, for any reason, you want to sort a range of elements, but not all elements. Suppose you do not want to sort the first and last elements. It will be something like this:
sort(vec.begin() + 1, vec.end() - 1);
Well, what about writing the same code something like this:
sort(++vec.begin(), --vec.end());
Isn't it equal to the previous line? Well, the behavior of this code depends on the implementation of the standard library you use with your compiler. This code may work fine according to your requirement—or it may give a compilation error. To be more specific, Visual C++ 7.0 compiles this code; Visual C++ 6.0 will give you a compilation error.
To see why this code is not portable, we have to look at the specification of a vector in the C++ standard and its implementation in different libraries. According to the C++ standard, a vector does have a random access iterator and stores its elements in a continuous memory location, just like an array. The natural solution that comes to mind to make an iterator of vectors is a pointer of the template type. All the specifications for the random access iterator would be accessed with this pointer. So, the code to make a vector class is something like what follows. Here, I use upper-case letters to distinguish it from standard libraries' names and omit unnecessary detail.
template <typename _Type>
class Allocator
{
public:
    typedef _Type* pointer;
    // other stuff
};

template <typename _Type, typename _Allocator = Allocator<_Type> >
class Vector
{
    typedef typename _Allocator::pointer _Tptr;
    typedef _Tptr iterator;
    // other stuff
protected:
    iterator _First, _Last, _End;
};
To begin and end the member function of the vector, just return the pointer of the vector type.
template <typename _Type, typename _Allocator = Allocator<_Type> >
class Vector
{
    // other stuff
public:
    iterator begin()
    {
        return (_First);
    }
    iterator end()
    {
        return (_Last);
    }
};
However, some implementations wrap the pointer into a class. Here is the simplest implementation of this; again, extra detail is omitted.
template <typename _Type>
class Allocator
{
public:
    typedef _Type* pointer;
};

// wrap pointer into a class
template<class _Type>
class _Ptrit
{
public:
    // other functions
    _Ptrit& operator++()
    {
        ++current;
        return (*this);
    }
    _Ptrit operator++(int)
    {
        _Ptrit _Tmp = *this;
        ++current;
        return (_Tmp);
    }
    _Ptrit& operator--()
    {
        --current;
        return (*this);
    }
    _Ptrit operator--(int)
    {
        _Ptrit _Tmp = *this;
        --current;
        return (_Tmp);
    }
protected:
    _Type current;
};

template <typename _Type, typename _Allocator = Allocator<_Type> >
class Vector
{
    typedef typename _Allocator::pointer _Tptr;
    typedef _Ptrit<_Tptr> iterator;
    // other stuff
protected:
    iterator _First, _Last, _End;
public:
    // other functions
    iterator begin()
    {
        return (_First);
    }
    iterator end()
    {
        return (_Last);
    }
};
In this implementation, the above code will compile. The same situation may arise in the case of a string because some implementations use char* as a random access iterator.
string str("Hello World");

// may not compile on all implementations
sort(++str.begin(), --str.end());

// compiles
sort(vec.begin() + 1, vec.end() - 1);

Variable Scope
doubt
Posted by Legacy on 12/15/2003 12:00am
Originally posted by: Debug
[quote]
[end quote]
But
int arr[]={ 1, 2, 3, 4, 5, 6 };
int* ptr=arr;
*(++ptr)=10; // compile & work
Does this mean that the ++ and -- operators do not return an rvalue?
Good Work!!!! - Posted by Legacy on 12/13/2003 12:00am
Originally posted by: Nagesh
Hi Zeeshan
It's a nice article which sheds some light on porting.
Well... I found that here you are mostly concentrating on the STL.
Does this mean that only the STL has to be taken care of while porting, and other things will remain as per the ANSI standard?
If we are using some OS-specific API, what should we do while porting? I mean, is there any article which lists cross-platform API equivalents (functionality-wise)?
ummmm.... A link which would give us an OS-wise comparison.
I agree, this is out of scope of this article. But just I want to know about it.
Thank you
Regards
Nagesh | http://www.codeguru.com/cpp/cpp/cpp_mfc/portability/article.php/c4055/Writing-Portable-Code-in-CmdashA-Vectors-and-String-Issue.htm | CC-MAIN-2015-32 | refinedweb | 746 | 56.76 |
Const variables
So far, all of the variables we’ve seen have been non-constant -- that is, their values can be changed at any time. For example:
However, it’s sometimes useful to define variables with values that can not be changed. For example, consider the gravity of Earth (near the surface): 9.8 meters/second^2. This isn’t likely to change any time soon (and if it does, you’ve likely got bigger problems than learning C++). Defining this value as a constant helps ensure that this value isn’t accidentally changed.
To make a variable constant, simply put the const keyword either before or after the variable type, like so:
Although C++ will accept const either before or after the type, we recommend placing const before the type, and naming const variables following the same conventions as other variables (including non-const ones):
Const is often used with function parameters:
Making a function parameter const does two things. First, it tells the person calling the function that the function will not change the value of myValue. Second, it ensures that the function doesn’t change the value of myValue.
When arguments are passed by value, the function receives a copy anyway, so a const parameter mainly serves as documentation and as a compile-time check that the copy is not modified.
Runtime vs compile time constants
C++ actually has two different kinds of constants.
Runtime constants are those whose initialization values can only be resolved at runtime (when your program is running). Variables such as usersAge and myValue in the snippets above are runtime constants, because the compiler can’t determine their initial values at compile time. usersAge relies on user input (which can only be given at runtime) and myValue depends on the value passed into the function (which is only known at runtime). However, once initialized, the value of these constants can’t be changed.
Compile-time constants are those whose initialization values can be resolved at compile-time (when your program is compiling). Variable gravity above is an example of a compile-time constant. Compile-time constants enable the compiler to perform optimizations that aren’t available with runtime constants. For example, whenever gravity is used, the compiler can simply substitute the identifier gravity with the literal double 9.8.
When you declare a const variable, the compiler will implicitly keep track of whether it’s a runtime or compile-time constant.
In most cases, this doesn’t matter, but there are a few odd cases where C++ requires a compile-time constant instead of a run-time constant, for example when specifying the length of a fixed-size array -- something we’ll cover later.
constexpr
To help provide more specificity, C++11 introduced the keyword constexpr, which ensures that a constant must be a compile-time constant:
constexpr variables are implicitly const. This will become important when we talk about other effects of const in upcoming lessons.
Best practice
Any variable that should not be modifiable after initialization and whose initializer is known at compile-time should be declared as constexpr.
Any variable that should not be modifiable after initialization and whose initializer is not known at compile-time should be declared as const.

A symbolic constant is a name given to a constant value. One way to create symbolic constants is via object-like macros with substitution text (wherever the identifier appears, it is replaced by the substitution text). Giving each value its own named constant also means that if we need to update our classroom size, we won't accidentally change the name length too.
So why not use #define to make symbolic constants? There are (at least) three major problems.

First, because macros are resolved by the preprocessor, which replaces the symbolic name with the defined value, #defined symbolic constants do not show up in the debugger (which shows you your actual source code). So although the compiler would compile

int max_students { numClassrooms * 30 };

in your debugger you'd see

int max_students { numClassrooms * MAX_STUDENTS_PER_CLASS };

and MAX_STUDENTS_PER_CLASS would not be watchable. You'd have to go find the definition of MAX_STUDENTS_PER_CLASS in order to know what the actual value was. This can make your programs harder to debug.
Second, macros can conflict with normal code. For example:
If someheader.h happened to #define a macro named beta, this simple program would break, as the preprocessor would replace the int variable beta’s name with whatever the macro’s value was.
Thirdly, macros don’t follow normal scoping rules, which means in rare cases a macro defined in one part of a program can conflict with code written in another part of the program that it wasn’t supposed to interact with.
Warning
Avoid using #define to create symbolic constant macros.
A better solution: Use constexpr variables
A better way to create symbolic constants is through use of constexpr variables:
Because these are just normal variables, they are watchable in the debugger, have normal scoping, and avoid other weird behaviors.
Use constexpr variables to provide a name and context for your magic numbers.
Using symbolic constants throughout a multi-file program

In many applications, a given symbolic constant needs to be used throughout your code (not just in one location). The recommended approach:

1) Create a header file to hold these constants
2) Inside this header file, define a namespace (see lesson 6.2 -- User-defined namespaces)
3) Add all your constants inside the namespace (make sure they’re constexpr in C++11/14, or inline constexpr in C++17 or newer)
4) #include the header file wherever you need it
For example:
constants.h (C++11/14):
In C++17, prefer “inline constexpr” instead:
constants.h (C++17 or newer):
Use the scope resolution operator (::) to access your constants in .cpp files:
main.cpp:
It might be helpful to note that the "Using symbolic constants throughout a multi-file program" section only applies to const and constexpr's.
(I spent an hour trying to figure out why one of my test programs doesn't work and the reason was because it wasn't constexpr)
A "symbolic constant" that isn't a constant of some kind is just a global variable, not a symbolic constant. :)
constants are not variables. A couple of times you say constant variable. A variable varies (its value changes). A constant does not vary. I understand this is common usage, sigh.
Another issue with #define is that macros are typeless. It's not normally a big deal, but you can have ints being put into longs, floats into doubles, etc. const, {}, and constexpr help with strong typing.
As a beginner it makes a lot of sense to call it a constant variable, even if this is an oxymoron, because you use the same procedure as for defining a variable and just add const before it. If you remove const, what you have is a variable. With time the programmer will come to call it simply a constant.
Also remember that the mutable specifier allows a member of a const object to be modified.
This paragraph seems to be mistaken - 30 is what would show up in the debugger, and MAX_STUDENTS_PER_CLASS would be what the compiler would compile.
Hi,
inline constexpr type variable{expression};
`inline` is C++17 or higher -- that brings me to some questions: why are there different versions of C++? Do you mean a compiler, or a different type of compiler? What's the difference between them, and why prefer one over the other?
What does an inline variable mean?
Inline variables are covered a little more in depth in lesson 6.8, "Global constants and inline variables."
Hi, thanks for such a great tutorial.
Can you explain the difference between constexpr and inline constexpr? I tried looking it up elsewhere, but it said that constexpr is explicitly evaluated at compile-time whereas inline constexpr is evaluated at run-time -- wouldn't that make it just like a normal const?
`constexpr` means that the variable _can_ be computed at compile-time. The compiler is free to choose when the computation takes place, no matter whether a variable is `constexpr` or not.
`inline constexpr` gives the variable external linkage, but allows it to be defined multiple times. This allows placing the variable's definition in a header file and including it in multiple source files without violating the one-definition-rule.
Placing a non-`inline` `constexpr` variable in a header would work too, because `constexpr` variables have internal linkage, but the variable would be defined, and therefore take up memory, for each source file it's included in. If you include the header in 20 source files, you're potentially taking up 20 times the amount of memory you'd actually need.
This helps a lot. Thanks once again.
"This allows placing the variable's definition in a header file and including it in multiple source files without violating the one-definition-rule."
Soo... if I use 'inline', it'll virtually (everything is virtual though :v) place a copy of my code in multiple source files, without taking any extra space for those copies?? It seems a bit weird how the compiler does it without taking that extra memory.
Doesn't using 'inline' increase the compilation time?
Thanks in Advance <3
> without taking any extra space for those copy?
If the function/variable actually gets inlined (copied to the caller), you can end up using much more space.
> Doesn't using 'inline' increase the compilation time?
Anything you do in a header can drastically increase compilation time, because the contents get compiled as part of every source file that includes the header.
Hello there
just wanna give you little suggestion
according to your suggestion in the best practice section, we should avoid magic numbers altogether.
But I still found a magic number in your last code snippet here :
so how about we replace "radius" with "diameter" instead, in order to get rid of the magic number 2.0 used to calculate the diameter?
Like this :
Or perhaps we could keep the "radius", but also create a new function just to calculate the diameter?
Thanks in advance... Great tutorial by the way
QUESTIONS :
1.) If we must not use #define, then can #ifndef recognize constant variables created by constexpr?
And can we then replace #define with constexpr?
#ifndef CONSTANTS_H
#define CONSTANTS_H --->> constexpr (string or something) CONSTANTS_H { }; ?
2.) what does inline before constexpr do ?
I'm not sure about what inline does but for your first question, the
#ifndef and
#define
are used so that when you include your self-created header files into your program, the preprocessor doesn't end up copying the same code (from the header) into your main file twice by mistake (if more than one file uses #include "your_header"), as copying the same code twice can lead to compiler errors.
basically the macro that you are talking about is never going to be used in the code. It is there for the preprocessor.
I tried my best to explain but it's probably not clear, so you should check out lesson 2.11 - Header Guards
Hey nascar & Alex, could you please explain the use of constexpr in functions?
I don't understand when and why you should use it.
For example in the standard library (std::array):
Why is constexpr used here? When should I use it, and how do I know whether the function actually needs it at all?
Thank you :) Nice to see you again btw!
The more `constexpr` the better, usually. There are some downsides to it so you have to decide for yourself.
`constexpr` functions can be evaluated at compile-time. You can create a `constexpr` `std::array` and access its elements at compile-time.
The compiler knows the values of the elements, so the compiler can figure out the value of `i` at compile-time. The array and `i` disappear during compilation, the program is identical to
because the compiler was able to do everything apart from printing at compile-time.
Functions can only be marked as `constexpr` if they really can be evaluated at compile-time. A function that prints something to `std::cout` cannot be `constexpr` for example.
Thanks for the explanation, nascar!
so basically a symbolic constant is the same as a constant variable?
oops, sorry, ignore this comment; I reread it and understood. Side note: if I remember correctly, we used to be able to delete comments
Hi! So when you say that the compiler doesn't know what value is passed to a function, what exactly does that mean? You stated that "myValue depends on the value passed into the function (which is only known at runtime)."
But, when the compiler is parsing the document, does it not read the line that contains the caller, and can it not determine the type of the value? For example, in this snippet of code:
I get an error: error: cannot convert 'std::string' {aka 'std::__cxx11::basic_string<char>'} to 'int'. So, I don't understand how the values passed into functions are not known until runtime. I don't believe this was elaborated on further in any other sections. Could you clarify a bit? Thanks!
By default, functions and variables are assumed to be evaluated at run-time. The compiler reads their definition, also reads their value (If initialized with a literal), but it doesn't need to keep track of the value, that's the task of the run-time.
The compiler knows `doSomething` will be called with the value stored in `a`, but it doesn't need to know which value `a` has. The compiler writes "Write 123 to variable `a`" and "look into `a` and pass its value to `doSomething`" into the binary. It doesn't need to know `a`s value when it writes the instruction to call `doSomething`.
1.
> int age;
at code block 5, line 2 should be
> int age{};
for consistency.
2. Why don't new projects in Visual Studio use the C++17 compliant compiler by default?
Instead of creating variables in a namespace with inline & constexpr, using a helper class/struct to wrap static constexpr variables would be more readable.
Hello,
First of all, thank you for making learncpp.com! It’s an amazing tool for learning C++!
I’m starting to get quite confused about when it’s OK to use magic numbers and when it’s not. I’m sorry if the following questions are badly formulated (English isn’t my first language) or come off as ignorant; there is still a lot I don’t know about programming in general and C++ specifically.
#1 my understanding of what a magic number is:
I had Python at uni but my professor never told us anything about magic numbers, so this sub-chapter was the first time I heard about them. As far as I understand, a magic number is "any number that is not given a name". I provide some examples of what would be magic numbers below.
Based on my understanding, 2, 4, 4, 4, 2 and 0 in Example 1 would all be regarded as magic numbers. My reasoning for this is that it's hard to see which animal each number relates to from the initialization of the array alone.
In Example 2, the 1 and 100 in line 43 are used directly but commented, which increases the readability of the code, but as far as I understand they are still magic numbers and should be defined in variables for increased readability and ease of maintenance.
Example 1 - From solution of task 2 in the Quiz in chapter 6.2 — Arrays (Part II)
Example 2 - From solution to task 2 in 5.x — Chapter 5 comprehensive quiz
#2 I’m now at 6.3 - Arrays and loops. Am I right in thinking that the reason the examples use so many magic numbers is that the intent is to show how the different concepts work in as compact a form as possible, and that’s why you don’t define all the magic numbers?
#3 Are there any cases where using magic numbers is actually preferred? I see that a lot of people on Quora and in general use quite a few magic numbers. Do you think that this is just for convenience while coding, or is there a good reason to do so?
Hello,
magic numbers are all values that are based on something else in your code or are not obvious from their context. There is no strict definition of magic numbers, so you'll come across different explanations. Comments don't make literals non-magic. Code should be readable without comments.
Example 1: Not magic. These values are correlated to the animals by index. Although it's not easy to see which animal has which number of legs, the meaning of the numbers is obvious (number of legs).
Example 2: Not magic. As in #1, the reader can look up which parameters `std::uniform_int_distribution` takes. If you were to use these values again to print the message in line 50, they'd become magic, because they could change independently but mean the same thing. If you wanted to print those numbers, you should use `die.a()` and `die.b()` or add constants.
But there's something else in this code. The 'y' in `playAgain` is used more than once, but changing one or the other wouldn't make sense. Clearly, a constant should be used instead. I updated that lesson to use a `while (true)` loop to get rid of the duplicate.
If a value makes a reader think "Why this value?", it's magic.
I think it's easiest if you post your quiz solutions in the lessons you're in right now and I'll tell you what's magic and what's not.
Missed the #include<iostream> and int main()
I am getting an error in this. Why?
The error message is:
Error code:C2678 desription:binary '<<': no operator found which takes a left-hand operand of type 'std::istream' (or there is no acceptable conversion) project:Testing Subject file:C:\Users\HP PAVILLION_2\source\repos\Testing Subject\Testing Subject.cpp line:6
here's the code......
and also, I compiled it and it works, but after I close the console the error list still shows the error
There is no such error in your code. Clean the project, restart your IDE
"because the compiler can’t determine their initial values at compile time. "
How come the compiler can't determine those initial values at compile time and STILL won't complain, while when we declare an array with a size given by a variable ("int n {10}; int x[n];") it will complain? Doesn't the compiler have any idea about variables' values during compile time?
Why can't the compiler know the value of n is 10 ("int n{10};") when it is not user input or a function parameter that would only be known at run-time?
Making exceptions is bad. It makes code and compiler implementation a lot more complicated.
Right now, we say "If a variable is `constexpr`, it's a compile-time constant". Otherwise, it's not. (There is, unfortunately, an exception to this, duh).
By allowing regular variables to be used in a constant context, it'd have to be specified under which condition it's allowed, so you'd end up with something like "If a variable is `constexpr`, or it is only used in a read-only manner before it is used in a constant context and it is initialized with a value or variable that is usable at compile-time, it's a compile-time constant".
Needless to say, the first version is a lot simpler. People (and the compiler) don't have to read your code to know if a variable is usable at compile-time. It's only usable at compile-time if you say so (Plus the exception I mentioned before, but won't go into detail).
Is that exception "constant variables of integral type with a constexpr initializer"? And is that exception inherited from C?
Yes that's the exception I was thinking of. I don't think C can do this, but I don't speak C.
I believe the nomenclature of calling a constant a 'constant variable' as used in this lesson is incorrect. A variable is called as such because it can vary. A constant cannot. A constant and a variable are distinctly different and opposite.
it is called a variable because its value still can be changed by the programmer......eat some asparagus,boi.
For the example of creating a namespace named constants, I got an error when I defined a function with the same name, constants. Does anyone know about namespace and function name conflicts?
Please post your code and exact error message.
Hello, I have several questions; I'd be thankful if you could answer them:
1: In the part where you talk about "making a function parameter const" you say: it tells the person calling the function that the function will not change the value of myValue. Second, it ensures that the function doesn't change the value of myValue. Aren't these two reasons the same?
2: Before you talked about constexpr, you provided an example which uses std::bitset. Can I ask what it is and what this piece of code does: "std::bitset<numberOfBits> b{};"?
3:can you please provide an example for the third major problem of using object-like macros to define symbolic constants which says: Thirdly, macros don’t follow normal scoping rules, which means in rare cases a macro defined in one part of a program can conflict with code written in another part of the program that it wasn’t supposed to interact with.
4: What's the difference between inline constexpr and constexpr?
thanks & sorry, I'm bombarding you with my questions lately.
1.
> it tells the person calling the function that the function will not change the value of myValue
This doesn't make sense yet. When you get to references, you'll understand the difference.
2.
Seems like I mixed up some lessons. `std::bitset` is covered later, I removed the example from this lesson.
3.
4.
`inline` variables can have multiple identical definitions, but only 1 variable will actually be created. Without `inline`, you'd create a variable (Which uses up memory) for each definition. For non-const variables, `inline` can also be used to bypass the one-definition-rule.
DEFINE EVERYTHING IN A NAMESPACE! (KIDDING, WOULD RESULT IN BREAKING.)
Why is it called "symbolic"? It’s a "define" function.
Function -numClassrooms- was not declared.
How do I post a code snippet on here? I tried to code inside [] but it posts it as text.
Use the code tags [;code] and [;/code]
but delete the semicolon first then put your code between the code tags
Can constexpr be declared inside namespaces and forward declared?
constants.cpp
constants.h
main.cpp
C:\Users\admin\Documents\cbp\test\constants.h|6|error: uninitialized const 'myconstants::myconst' [-fpermissive]|
`constexpr` variables have to be initialized, they can't be forward declared.
You can define them in a namespace.
On Sep 07, 2002 21:59 +0200, Peter T. Breuer wrote:
> "A month of sundays ago Lars Marowsky-Bree wrote:"
> > .
>
> Well, I know of some of these. Intermezzo I've tried lately and found
> near impossible to set up and work with (still, a great improvement
> over coda, which was absolutely impossible, to within an atom's
> breadth). And it's nowhere near got the right orientation. Lustre
> people have been pointing me at. What happened to petal?

Well, Intermezzo has _some_ of what you are looking for, but isn't really geared to your needs. It is a distributed _replicated_ filesystem, so it doesn't necessarily scale as well as possible for the many-nodes-writing case.

Lustre is actually very close to what you are talking about, which I have mentioned a couple of times before. It has distributed storage, so each node could write to its own disk, but it also has a coherent namespace across all client nodes (clients can also be data servers), so that all files are accessible from all clients over the network. It is designed with high performance networking in mind (Quadrics Elan is what we are working with now) which supports remote DMA and such.

> But what I suggest is finding a simple way to turn an existing FS into a
> distributed one. I.e. NOT reinventing the wheel. All those other people
> are reinventing a wheel, for some reason :-).

We are not re-inventing the on-disk filesystem, only adding the layers on top (networking, locking) which is absolutely _required_ if you are going to have a distributed filesystem.

The locking is distributed, in the sense that there is one node in charge of the filesystem layout (metadata server, MDS) and it is the lock manager there, but all of the storage nodes (called object storage targets, OST) are in charge of locking (and block allocation and such) for files stored there.

You can't tell me you are going to have a distributed network filesystem that does not have network support or locking.

> Well, how about allowing get_block to return an extra argument, which
> is the ondisk placement of the inode(s) concerned, so that the vfs can
> issue a lock request for them before the i/o starts. Let the FS return
> the list of metadata things to lock, and maybe a semaphore to start the
> i/o with.

In fact, you can go one better (as Lustre does) - the layout of the data blocks for a file is totally internal to the storage node. The clients only deal with object IDs (inode numbers, essentially) and offsets in that file. How the OST filesystem lays out the data in that object is not visible to the clients at all, so there is no need to lock the whole filesystem across nodes to do the allocation. The clients do not write directly to the OST block device EVER.

> > What you are starting would need at least 3-5 years to catch up with what
> > people currently already can do, and they'll improve in this time too.
>
> Maybe 3-4 weeks more like.

LOL. This has been my full time job for the last 6 months, and I'm just a babe in the woods. Sure you could do _something_, but nothing that would get the performance you want.

> > A petabyte filesystem without journaling? A petabyte filesystem with a
> > single write lock? Gimme a break.
>
> Journalling? Well, now you mention it, that would seem to be nice. But
> my experience with journalling FS's so far tells me that they break
> more horribly than normal. Also, 1PB or so is the aggregate, not the
> size of each FS on the local nodes. I don't think you can diagnose
> "journalling" from the numbers.
>
> I am even rather loath to journal, given what I have seen.

Lustre is what you describe - dozens (hundreds, thousands?) of independent storage targets, each controlling part of the total storage space. Even so, journaling is crucial for recovery of metadata state, and for coherency between the clients and the servers, unless you don't think that hardware or networks ever fail. Even with distributed storage, a PB is 1024 nodes with 1TB of storage each, and that will still take a long time to fsck just one client, let alone return to filesystem-wide coherency.

> No features. Just take any FS that currently works, and see if you can
> distribute it. Get rid of all fancy features along the way. You mean
> "what's wrong with X"?

Again, you are preaching to the choir here. In principle, Lustre does what you want, but not with the "one big lock for the whole system" approach, and it doesn't intrude into the VFS or need no-cache operation because the clients DO NOT write directly onto the block device on the OST. They DO communicate directly with the OST (so you have basically linear I/O bandwidth scaling with OSTs and clients), but the OST uses the normal VFS/filesystem to handle block allocation internally.

> Well, it won't be mainstream, for a start, and
> that's surely enough. The projects involved are huge, and they need to
> minimize risk, and maximize flexibility. This is CERN, by the way.

We are working on the filesystem for MCR (), a _very large_ cluster at LLNL: 1000 4-way 2.4GHz P4 client nodes, 100 TB of disk, etc. (As an aside, sys_statfs wraps at 16TB for one filesystem, I already saw, but I think I can work around it... ;-)

> There is no difficulty with that - there are no distributed locks. All
> locks are held on the server of the disk (I decided not to be
> complicated to begin with as a matter of principle early in life ;-).

See above. Even if the server holds all of the locks for its "area" (as we are doing), you are still "distributing" the locks to the clients when they want to do things. The server still has to revoke those locks when another client wants them, or your application ends up doing something similar.

Cheers, Andreas
--
Andreas Dilger
Lookup tables (.LUT) and HL7 schemas (.HL7) are more code and less configuration. They should be exported and kept in sync between Atelier and namespace much inline with how .cls files are handled. It is not practical to manually export / import these via Management portal every time code needs to be committed to source control.
Are you "committed" to using GIT as your source control? If not, maybe take a look at Deltanji from George James Software (my employer).
We are "committed" to using GIT as we have invested time in learning it already plus it is free. I believe Deltaji is a paid product.
Will Deltanji be able to commit .LUT and .HL7 files natively?
Yes, Deltanji handles LUT and HL7 components natively. And yes, it's a paid product, though there is a free Solo edition.
Since you're using Ensemble, you may also like to know that Deltanji integrates with the Portal-based editors and with Studio, as well as with Atelier. It is server-side source control, so has no problem with scenarios where multiple developers work as a team in a single namespace.
What are feature restrictions between free and paid version?
There's a table at
The free Solo edition is intended for, well, solo situations, with one developer wanting straightforward checkout/checkin source versioning for their namespaces on their local Cache / Ensemble instance.
In your context the Team or Enterprise editions are more likely to be appropriate, particularly since the comparison table shows LUTs not being supported by Solo.
Deltanji is capable of a lot more than simple code versioning. Please contact me via the George James Software website if you'd like to evaluate Team or Enterprise editions, or if the capabilities of the Deploy edition are of interest to you.
Is anyone from Intersystems able to provide a response?
Can this feature be expected in Atelier release 1.3?
Your Sales Rep or Sales Engineer should also be a trustworthy resource.
(probably not on Sunday morning / afternoon) | https://community.intersystems.com/post/unable-export-lookup-tables-lut-and-hl7-schemas-hl7-directly-atelier-commit-source-control-git | CC-MAIN-2019-13 | refinedweb | 337 | 66.03 |
Some architectures need to maintain a kmem cache for thread info structures (the next patch adds that to powerpc to fix an alignment problem).

There is no good arch callback to use to initialize that cache that I can find, so this adds a new one and adds an empty macro for when it's not implemented.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---
So we have the choice here between:

 - the ifdef on the func name that I did, consistent with what I did before for iomap, which iirc Linus liked
 - add some more ARCH_HAS_* or HAVE_* (yuck)
 - add an empty definition to all archs' .h (pain in the neck but I can do it, though it will be an annoying patch to keep around)
 - do a weak function (will slightly bloat everybody for no good reason)

So unless there are strong complaints, I'd like to stick to my current approach.

 include/linux/sched.h |    4 ++++
 init/main.c           |    1 +
 2 files changed, 5 insertions(+)

--- linux-work.orig/init/main.c	2008-04-10 13:11:06.000000000 +1000
+++ linux-work/init/main.c	2008-04-10 13:11:19.000000000 +1000
@@ -623,6 +623,7 @@ asmlinkage void __init start_kernel(void
 	if (efi_enabled)
 		efi_enter_virtual_mode();
 #endif
+	thread_info_cache_init();
 	fork_init(num_physpages);
 	proc_caches_init();
 	buffer_init();

Index: linux-work/include/linux/sched.h
===================================================================
--- linux-work.orig/include/linux/sched.h	2008-04-10 13:11:44.000000000 +1000
+++ linux-work/include/linux/sched.h	2008-04-10 13:12:05.000000000 +1000
@@ -1893,6 +1893,10 @@ static inline unsigned long *end_of_stac
 #endif

+#ifndef thread_info_cache_init
+#define thread_info_cache_init() do { } while(0)
+#endif
+
 /* set thread flags in other task's structures
  * - see asm/thread_info.h for TIF_xxxx flags available
  */
28 December 2009 08:41 [Source: ICIS news]
By Clive Ong
SINGAPORE (ICIS news)--Asian demand for styrenic resins, especially expandable polystyrene (EPS), is expected to be weak in the first quarter of next year amid a seasonal lull, market players said on Monday.
Demand for polystyrene (PS) and acrylonitrile-butadiene-styrene (ABS), while better than EPS, is also expected to wane in January, they said.
Market participants said that buying momentum had weakened considerably in December as most factories in China had completed their business for the year while other producers had sufficient inventories to tide them over till next year.
Traders had also kept lean inventories in the face of weak buying momentum and most had liquidated their stocks in November to maintain low inventory levels during December and January.
The poor demand in the styrenics sector was in stark contrast to a bullish uptrend in feedstock styrene monomer (SM). Spot values of SM increased 25% since October.
Trading increased further in November and December. However, the trading fervour did not spill over into the styrenics sector.
"It is difficult to match the big price uptrend in SM, especially when it is the slow season for plastics," a trader said.
Resin suppliers are also worried about their inability to raise prices in line with rising SM costs.
"The spread between EPS and SM had dropped to $100/tonne (€70/tonne) or less, compared to the $180/tonne a supplier typically targets," a producer said.
The spread between general purpose PS and SM also declined below $80/tonne compared to the usual $100/tonne.
A PS producer in
Spot ABS prices were $300/tonne above SM costs in the second half of December. However, producers said the spread was insufficient to cover costs, mainly due to the rising values of other feedstock like acrylonitrile (ACN) and butadiene (BD) at above $1,800/tonne CFR NE Asia and $1,700/tonne CFR NE Asia respectively.
An ABS producer said: "With feedstock values all at elevated levels, we need more than $300/tonne spread to cover our production costs."
Resin producers are expecting demand to be spurred by a recovery in the global economy in the first half of next year but most producers said that any rebound would likely occur after the Lunar New Year holidays next year. ($1 = €0.70)
Re: [PATCH] check_/request_region fixes & request for enlightenment
From: Guennadi Liakhovetski (g.liakhovetski_at_gmx.de)
Date: 06/17/04
Date: Thu, 17 Jun 2004 21:27:59 +0200 (CEST)
To: Jesper Juhl <juhl-lkml@dif.dk>
On Wed, 16 Jun 2004, Jesper Juhl wrote:
> in the isp16.c case the region is free'ed in isp16_exit(), but one thing I
That looks like a bug - release_region() without request_region().
> don't understand is if I'm supposed to preserve the pointer returned by
> request_region and later pass that to release_region as well to make sure
> the right resource is free'ed? I see __release_region() taking 3
> parameters, but the release_region #define only takes two??? Could someone
> explain to me how that works?
No. You pass the same address and size you used to request the region.
> For now I assumed the current release_region in isp16_exit() is OK, and
> the code compiles fine with my changes, but I can't test it since I don't
> have the hardware.
> Here's the patch against 2.6.7 for that file - comments are very welcome :
You also have to release_region() in all failure-cases in the
isp16_init(). Also, I think, if the region is busy you should return
-EBUSY, but there I am not too sure.
> --- linux-2.6.7-orig/drivers/cdrom/isp16.c 2004-06-16 07:20:04.000000000 +0200
> +++ linux-2.6.7/drivers/cdrom/isp16.c 2004-06-16 22:36:52.000000000 +0200
> @@ -121,7 +121,7 @@ int __init isp16_init(void)
> return (0);
> }
>
> - if (check_region(ISP16_IO_BASE, ISP16_IO_SIZE)) {
> + if (!request_region(ISP16_IO_BASE, ISP16_IO_SIZE, "isp16")) {
> printk("ISP16: i/o ports already in use.\n");
> return (-EIO);
> }
>
>
> Now, in trm290.c it seems that the region it tries to access is already
> requested in probe_hwif() in ide-probe.c, so the check_region is only used
> as an extra check to print an error message. I see no code in trm290.c
> that ever tries to release the region, is that wrong, or does it simply
> rely on the code in ide-probe.c to release the region for it?
> How can it be that the check_region doesn't always fail if it has already
> been locked in ide-probe.c ? And if it doesn't always fail, shouldn't the
It is requested later:
trm290_init_one
  ide_setup_pci_device
    first calls
      do_ide_setup_pci_device
        ide_pci_setup_ports
          d->init_hwif(hwif) = init_hwif_trm290
            ***check_region***
    then it calls
      probe_hwif_init
        probe_hwif
          ide_hwif_request_regions
where regions are requested. So, I would just completely throw "check
region" away. A bit uncomfortable is that the driver seems to need to
configure the chip with the address:
hwif->OUTW(compat|1, hwif->config_data);
new = hwif->INW(hwif->config_data);
and that doesn't look very nice before request_region(). And there's no
other hook between request_region() in probe_hwif and the first use of the
chip, so, I am not quite sure if it is safe here to just blindly configure
the chip with the port address...
Please, correct me anybody if I am wrong.
Guennadi
> driver release it itself right after doing it's check (now) with
> request_region?
>
> Here's what I came up with initially - just replacing check_region with
> request_region - but I have a feeling it's not that simple.
> Would anyone care to clarify?
>
> --- linux-2.6.7-orig/drivers/ide/pci/trm290.c 2004-06-16 07:19:01.000000000 +0200
> +++ linux-2.6.7/drivers/ide/pci/trm290.c 2004-06-16 22:57:24.000000000 +0200
> @@ -373,12 +373,12 @@ void __devinit init_hwif_trm290(ide_hwif
> /* leave lower 10 bits untouched */
> compat += (next_offset += 0x400);
> # if 1
> - if (check_region(compat + 2, 1))
> - printk(KERN_ERR "%s: check_region failure at 0x%04x\n",
> + if (!request_region(compat + 2, 1, "trm290"))
> + printk(KERN_ERR "%s: request_region failure at 0x%04x\n",
> hwif->name, (compat + 2));
> /*
> * The region check is not needed; however.........
> - * Since this is the checked in ide-probe.c,
> + * Since this is checked in ide-probe.c,
> * this is only an assignment.
> */
> # endif
>
>
> I've been digging for a while looking for some documentation on this, but
> found nothing more than brief mentions about it in Documentation/
> Searching lkml archives and reading the
> check_region/request_region code provided enough info to give me the
> understanding I described above, but if someone has knowledge of some
> documentation/explanation on this I'd love a pointer so I can read up.
>
>
> --
> Jesper Juhl <juhl-lkml@dif.dk>
Programming in D for C Programmers
- Labeled versus Wide Characters
- Arrays that parallel an enum
- Creating a new type with typedef
- Comparing structs
- Comparing strings
- Sorting arrays
The D Way

The length of an array is accessible through the property "length".

int array[17];
foreach (i; 0 .. array.length) { }
int x;
array.length = array.length + 1;
array[array.length - 1] = x;
String Concatenation
The C Way

There are several difficulties to be resolved, like when can storage be freed, dealing with null pointers, finding the end of the strings, and memory allocation.

Functions that have no arguments

The C Way

void foo(void);

The D Way

D is a strongly typed language, so there is no need to explicitly say a function takes no arguments; just don't declare it as having arguments.
void foo() { ... }
D has a syntax for setting the alignment that is common to all D compilers. The actual alignment done is compatible with the companion C compiler's alignment, for ABI compatibility. To match a particular layout across architectures, use align(1) and manually specify it.

D has both C-style string literals which can use escaping, and WYSIWYG (what you see is what you get) raw strings usable with the `foo` and r"bar" syntax:
string file = r"c:\root\file.c"; // c:\root\file.c string quotedString = `"[^\\]*(\\.[^\\]*)*"`; // "[^\\]*(\\.[^\\]*)*"The famous hello world string becomes:
string hello = "hello world\n";
ASCII versus Wide Characters
Modern programming requires that wchar strings be supported in an easy way, for internationalization of the programs.
The C Way

C uses the wchar_t type and the L prefix on strings.

The D Way

The type of a string is determined by semantic analysis, so there is no need to wrap strings in a macro call. Alternatively, if type inference is used, the string can have a c, w or d suffix, representing UTF-8, UTF-16 and UTF-32 encoding, respectively. If no suffix is used, the type is inferred to be a UTF-8 string:
string utf8 = "hello";      // UTF-8 string
wstring utf16 = "hello";    // UTF-16 string
dstring utf32 = "hello";    // UTF-32 string

auto str = "hello";         // UTF-8 string
auto _utf8 = "hello"c;      // UTF-8 string
auto _utf16 = "hello"w;     // UTF-16 string
auto _utf32 = "hello"d;     // UTF-32 string
The D Way

D has powerful metaprogramming abilities which allow it to implement typedef as a library feature. Simply import std.typecons and use the Typedef template:
import std.typecons;

alias Handle = Typedef!(void*);
void foo(void*);
void bar(Handle);

Handle h;
foo(h); // syntax error
bar(h); // ok
alias Handle = Typedef!(void*, cast(void*)-1); Handle h; h = func(); if (h != Handle.init) ...There's only one name to remember: Handle.
Comparing structs
The C Way

While C defines struct assignment in a simple, convenient manner, it does not define struct comparison.
The D Way

D does it the obvious, straightforward way:

A x, y;
...
if (x == y)
    ...
Comparing strings
The C Way

The library function strcmp() is used:

char str[] = "hello";
if (strcmp(str, "betty") == 0) // do strings match?
    ...

C uses 0 terminated strings, so the C way has an inherent inefficiency in constantly scanning for the terminating 0.
The D Way

Why not use the == operator?

string str = "hello";
if (str == "betty")
    ...
D strings have the length stored separately from the string. Thus, the implementation of string compares can be much faster than in C (the difference being equivalent to the difference in speed between the C memcmp() and strcmp()).
D supports comparison operators on strings, too:
string str = "hello";
if (str < "betty")
    ...

Sorting arrays

The D Way

D has a powerful std.algorithm module with optimized sorting routines, which work for any built-in or user-defined type which can be compared:
import std.algorithm;

type[] array;
...
sort(array); // sort array in-place
void apply(void *p, int *array, int dim, void (*fp)(void *, int))
{
    for (int i = 0; i < dim; i++)
        fp(p, array[i]);
}

struct Collection
{
    int array[10];
};

void comp_max(void *p, int i)
{
    int *pmax = (int *)p;
    if (i > *pmax)
        *pmax = i;
}

void func(struct Collection *c)
{
    int max = INT_MIN;
    apply(&max, c->array, sizeof(c->array)/sizeof(c->array[0]), comp_max);
}
While this works, it isn't very flexible.
xmlrpclib.Server
#!/bin/python2.7
..
.. Text chopped out here
..
##
# Standard server proxy. This class establishes a virtual connection
# to an XML-RPC server.
# <p>
# This class is available as ServerProxy and Server. New code should
# use ServerProxy, to avoid confusion.
#
# @def ServerProxy(uri, **options)
# @param uri The connection point on the server.
# @keyparam transport A transport factory, compatible with the
# standard transport class.
# @keyparam encoding The default encoding used for 8-bit strings
# (default is UTF-8).
# @keyparam verbose Use a true value to enable debugging output.
# (printed to standard output).
# @see Transport
class ServerProxy:
"""uri [,options] -> a logical connection to an XML-RPC server
uri is the connection point on the server, given as
scheme://host/target.
The standard implementation always supports the "http" scheme. If
SSL socket support is available (Python 2.0), it also supports
"https".
If the target part and the slash preceding it are both omitted,
"/RPC2" is assumed.
The following options can be given as keyword arguments:
transport: a transport factory
encoding: the request encoding (default is UTF-8)
All 8-bit strings passed to the server proxy are assumed to use
the given encoding.
"""
..
.. Text chopped out here
..
Server = ServerProxy
python2.7 -c "from httplib import HTTPS"
import xmlrpclib
server_url = ''
server = xmlrpclib.Server(server_url)
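For anyone checking this on a newer interpreter, the same round trip can be exercised end to end with the Python 3 module names (xmlrpc.client.ServerProxy is the renamed xmlrpclib.ServerProxy, and Server is just its alias). The add function and the ephemeral port here are purely illustrative:

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Bind to an ephemeral port so the example never collides with a real service.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False, allow_none=True)
server.register_function(lambda x, y: x + y, "add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# ServerProxy plays the role of the old xmlrpclib.Server alias.
proxy = ServerProxy("http://127.0.0.1:%d/" % port)
result = proxy.add(2, 3)
print(result)  # -> 5

server.shutdown()
```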
Difference Between StringBuffer and StringBuilder
StringBuffer is a peer class of String: it provides everything a String does, plus mutability. A String holds an immutable, fixed-length sequence of characters, whereas a StringBuffer holds a growable, writable sequence. The advantage of a StringBuffer is that it grows automatically when characters or substrings are inserted in the middle or appended at the end. StringBuilder is Java's other mutable string class; a StringBuilder can likewise be modified at any point through method invocations.

A StringBuilder performs no synchronization, which means it is not thread-safe.

StringBuffer and StringBuilder are almost the same; the difference is that StringBuilder is neither synchronized nor thread-safe. If synchronization is required, use StringBuffer. StringBuilder was added in Java 5 (1.5) to give programmers a faster alternative when synchronization is not needed. The remaining differences are discussed in this article.
Head to Head Comparison between StringBuffer and StringBuilder (Infographics)
Below are the top 4 differences between StringBuffer and StringBuilder
Key Differences between StringBuffer and StringBuilder
Let us discuss some of the major differences between StringBuffer and StringBuilder:
- StringBuffer has been part of Java since the earliest releases (JDK 1.0), whereas StringBuilder is newer, introduced in Java 5 (1.5).
- StringBuffer is synchronized, but StringBuilder is not.
- Hence StringBuffer is thread-safe, while StringBuilder is not thread-safe.
- StringBuffer is slower than StringBuilder.
- For string concatenation and dynamic string building, StringBuilder is the better choice; use StringBuffer only when synchronized access is actually required.
- Depending on the Java version in use, the plus (+) operator implicitly uses StringBuffer or StringBuilder for string concatenation: on Java 5 and later the compiler emits StringBuilder, and on versions below 5 it emits StringBuffer.
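As a sketch of what that compiler rewrite means in practice (the class and variable names below are illustrative, not from the article), reusing one builder inside a loop avoids the intermediate object that `+=` would create on every iteration:

```java
public class ConcatDemo {
    public static void main(String[] args) {
        // On Java 5+, `s += i` compiles to roughly:
        //   s = new StringBuilder().append(s).append(i).toString();
        // Inside a loop it is cheaper to reuse a single builder:
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 5; i++) {
            sb.append(i).append(',');
        }
        sb.setLength(sb.length() - 1); // drop the trailing comma
        System.out.println(sb);        // prints 0,1,2,3,4
    }
}
```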
Example for String Buffer
Java Program to demonstrate the use of StringBuffer class.
public class BufferTest {
    public static void main(String[] args) {
        StringBuffer buffer = new StringBuffer("hello");
        buffer.append("java");
        System.out.println(buffer); // prints hellojava
    }
}
Example for String Builder
Java Program to demonstrate the use of StringBuilder class.
public class BuilderTest {
    public static void main(String[] args) {
        StringBuilder builder = new StringBuilder("hello");
        builder.append("java");
        System.out.println(builder); // prints hellojava
    }
}
Methods such as substring, capacity, trimToSize and length are available on both classes; StringBuilder mirrors StringBuffer's API, so the choice between them comes down to synchronization rather than features.
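In practice, StringBuilder exposes these same methods (both classes share the same builder API since Java 5). A quick check; the capacity values rely on the documented rule that a builder constructed from a string has an initial capacity of the string's length plus 16:

```java
public class ApiDemo {
    public static void main(String[] args) {
        StringBuffer buf = new StringBuffer("hello");
        StringBuilder bld = new StringBuilder("hello");

        // Both classes expose the same inspection methods.
        System.out.println(buf.length() + " " + bld.length());               // 5 5
        System.out.println(buf.substring(1, 3) + " " + bld.substring(1, 3)); // el el
        // new StringBuffer/StringBuilder(str) starts at str.length() + 16.
        System.out.println(buf.capacity() + " " + bld.capacity());           // 21 21
    }
}
```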
StringBuffer vs StringBuilder Comparison Table
Below is a comparison between StringBuffer and StringBuilder.
Conclusion
To address StringBuffer's performance drawbacks, a new class, StringBuilder, was added in Java 5. In most single-threaded cases StringBuilder can be used in place of StringBuffer, since its performance is considerably better. StringBuilder is nearly identical to StringBuffer; a few factors differentiate them, and we should know where each one belongs.
For instance, when no synchronization is involved, StringBuilder gives better results and faster execution, whereas in a multi-threaded environment StringBuffer should be used. In short: if thread safety is not a concern, StringBuilder is the better choice; otherwise, StringBuffer is.
Despite these differences, both classes hold mutable character sequences. A close look at the source of StringBuilder.java confirms that it is essentially an unsynchronized StringBuffer. With that in mind, the usage of both classes should now be clear.
No port in a storm : the misguided use of in-country refugee processing in Haiti
Material Information
Title:
No port in a storm : the misguided use of in-country refugee processing in Haiti
Physical Description:
Mixed Material
Publisher:
N.Y. : Americas Watch :etc. 1993
Notes
General Note:
4-tr-Am.W.-1993
Record Information
Source Institution:
Columbia Law Library
Holding Location:
Columbia Law Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
LLMC31543
System ID:
AA00000874:00001
Full Text
This volume was donated to LLMC to enrich its on-line offerings and for purposes of long-term preservation by
Columbia University Law Library
AMERICAS WATCH
A Division of Human Rights Watch
485 Fifth Avenue, New York 10017, (212) 372-8400, fax (212) 972-0905
1522 K Street, Washington D.C. 20005, (202) 571-6592, fax (202) 371-0124
NATIONAL COALITION FOR HAITIAN REFUGEES
16 East 42nd Street, New York 10017
JESUIT REFUGEE SERVICE/USA
1424 Sixteenth Street N.W., Washington D.C. 20036 (202) 462-0400, fax (202) 328-9212
September 1993 Volume 5, Issue 7
No Port in a Storm
The Misguided Use of In-Country Refugee Processing in Haiti
Contents
Summary of Findings
I. Introduction
II. History of U.S. Policy Toward Haitian Refugees
III. In-Country Processing in Haiti
   A. Background
   B. Operational Structure
   C. Recent Expansion
   D. Current Functioning
IV. A Critical Assessment of the ICP Program
   A. The Central Role and Biased View of the State Department
   B. Political Isolation Weakens the Program
   C. No Safe Haven Component is Available
   D. Operational Deficiencies
   E. Inconsistency in Adjudication
   F. Representative Cases
   G. Some Asylum Seekers Will Not Risk Applying
V. Interdiction, Forced Return and In-Country Processing
VI. Conclusions
VII. Recommendations
Acknowledgments
SUMMARY OF FINDINGS
When the September 30, 1991 military coup d'état exiled Haiti's democratically elected president and unleashed some of the most brutal repression in Haitian history, the U.S. government went to new extremes in curtailing the rights of Haitian asylum seekers. The damage done by this misguided and discriminatory refugee policy will persist long after a political settlement is achieved in Haiti.
For many years, the United States government has been interdicting Haitians on the high seas and returning them to Haiti with only minimal efforts at screening for refugee status. This policy, coupled with discriminatory treatment of Haitian asylum seekers in the U.S., has been the focus of longstanding criticism and a stream of legal challenges.
The Bush Administration's response to the September 1991 political crisis was feeble and to the refugee crisis, reprehensible. The United States joined other nations in the western hemisphere in condemning the coup, refusing to recognize the new military-backed government and imposing sanctions. However, after an initial hesitation, and in spite of widespread human rights violations and generalized violence, the interdiction policy continued. The exception was a short interlude when Haitians picked up at sea were taken to Guantanamo Bay to be screened for asylum seekers after a Florida federal district judge imposed a temporary restraining order halting forced repatriations. In February 1992, the Bush Administration established an in-country processing (ICP) program through the U.S. Embassy in Port-au-Prince. That same month, the Supreme Court lifted the ban on the involuntary return (refoulement) of Haitian refugees.
The parameters of debate shifted dramatically, however, when on May 24, 1992, then-President Bush ordered all Haitians to be interdicted on the high seas and summarily returned to Haiti, with no prior screening for refugees fearing persecution. ICP, which had historically been conceived as an additional avenue of protection for refugees in selected countries, became the only option for victims of Haiti's repressive military regime.
U.S. foreign policy and refugee policy have been historically inseparable and interdependent. The case of Haiti, and Haitians, is no exception. Newly elected President Clinton, who had made campaign promises to rectify the illegal and irresponsible refugee policy, opted instead to continue it. His administration justified this reversal by raising the spectre of a huge, uncontrollable invasion of economic refugees and by arguing that the policy saved lives.
The Clinton Administration has undeniably contributed to progress made thus far in the reinstatement of constitutional government. Nevertheless, the pre-inauguration announcement that the policy of forcibly returning refugees would continue, with the support of President Aristide, was inconsistent with the Administration's stated commitment to seeking justice in Haiti. Increased efforts on the political front became the excuse for forfeiting the rights of the refugees.
In January, the incoming and outgoing administrations agreed to blockade the island with U.S. Coast Guard cutters, Navy ships and helicopters in order to prevent refugee flight. Clinton's administration went so far as to defend the policy of forced return, successfully, before the Supreme Court, leaving the heretofore globally recognized principle of non-refoulement in a shambles. It further proposed to expand and improve ICP, thereby attaining what has since been touted as "complete coverage" for Haitian asylum seekers. Thus, in an ironic twist, non-refoulement is considered irrelevant to a major refugee crisis, and ICP, for the first time in its history, is considered an appropriate sole remedy.
In March 1993, the Inter-American Commission on Human Rights of the Organization of American States issued an interim resolution in response to a petition pending before it challenging the U.S. government's Haitian interdiction program. The resolution found that the interdiction policy is in violation of international law and should be suspended immediately.
In spite of observable improvements made this year in the program, ICP in Haiti, while certainly able to help some people, cannot be considered an adequate sole remedy for asylum seekers. It is both a product and a victim of the flawed and politicized view of the Haitian refugee crisis held by the U.S. government, and as such, is isolated from and distrusted by international and local refugee experts and human rights organizations, not to mention the very people it is meant to assist.
The State Department runs the program and is responsible for every aspect of it. The Immigration and Naturalization Service (INS) handles the actual case adjudication, which is heavily influenced by the faulty premise behind the program and overly reliant on the State Department, including for information on country conditions and Haitian culture. Human rights analysis from the State Department is contradictory and at times appears tailored to fit the refugee policy. The fact that the U.S. government considers ICP an adequate response in the Haitian context is testimony to its biased perspective on human rights.

The most obvious shortcomings in ICP, as applied in Haiti, are the following:
1. There is no protection component. A number of cases have been documented of Haitians^hp have been persecuted at different stages of the process, including while awaiting a decision, after conditional approval and after being denied asylum. Risks are exacerbated by inordinately long delays in processing all but the most exceptional cases.
2. There are built-in characteristics, stemming from the U.S. government's incorrect assessment of the refugee crisis, which lead to limited access to the reasonably expedited treatment an asylum seeker logically needs and deserves. All applicants who are not "high-profile" or deemed to be in imminent danger will not even have an initial interview until six or more months after approaching the program. This includes people who would be able to meet the burden of proof for asylum. Priority (vetting) determinations based solely on the contents of a written questionnaire do not constitute a fair hearing under the circumstances.
3. There is evidence of inconsistency in adjudication, unfair application of the standard for asylum and questionable credibility determinations. Cases reviewed showed that past persecution is nearly always a prerequisite for approval. In several cases reviewed, a denial of asylum was only overturned when the applicant was brutalized in the interim. Even among cases where persecution has already occurred, asylum has been denied.
4. Those potential asylum seekers who do not feel that they can safely avail themselves of the program are left with no option. Haitian human rights groups and NGOs feel that this is the case for a significant number of victims of persecution.
5. Haitians interdicted on the high seas and returned are subject to detention under a 1980 decree prohibiting the organization of illegal departures from the country. The existence of this law blurs the distinction between illegal departure and refugee flight. The presence of ICP does not alter the fact that forcibly returning Haitians interdicted on the high seas, puts them at serious risk of both prosecution and persecution.
The Clinton Administration's efforts toward achieving a political solution in Haiti can be favorably contrasted to his predecessor's inaction. Nevertheless, this progress is diminished by the continuation and promotion of a refugee policy that is inhumane and illegal and ultimately calls into question the U.S. government's commitment to human rights and a democratic regime in Haiti. It would be a mistake to
assume that progress in the restoration of constitutional government signals an end to repression, and hence to the needs of asylum seekers. It is imperative that this policy be replaced with an approach to Haitian refugees which incorporates basic refugee protections.
ICP has been unfairly used as an excuse for forcibly repatriating Haitians. A broader solution to the Haitian refugee crisis which respects the basic principles of non-refoulement and temporary refuge is called for. ICP could appropriately serve as part of such a response.
Finally, the treatment meted out to Haitians has furthered a global trend toward curtailing the rights of asylum seekers and closing borders in the face of victims of persecution. The Haitian experience flags some of the dangers inherent to attempts to address refugee migration through abbreviated procedures and summary return.
I. INTRODUCTION
The September 30, 1991 military coup that exiled President Jean-Bertrand Aristide after only eight months in office, submerged Haiti under a tidal wave of repression and despair. The military fury unleashed against the broad popular sectors that brought Aristide to power has left hundreds, perhaps thousands, dead and made many thousands more the targets of various forms of brutal persecution.1 A direct result of this widespread destruction of Haitian society has been forced migration on a massive scale. Human rights groups estimate that the number of people internally displaced or in hiding since the coup is in the hundreds of thousands.2 Tens of thousands more took to the high seas, thereby exercising their internationally recognized right to leave their country and seek asylum.3
In the wake of the coup, the Bush Administration was faced with two closely interrelated problems:
1 The Inter-American Commission on Human Rights reported in an August 27, 1993 press release, that 1,500 people had been killed since the coup and 300,000 driven into hiding. Haitian human rights groups' estimates are even higher. See generally Americas Watch and National Coalition for Haitian Refugees, Silencing a People (New York: AW and NCHR, 1993). See also Department of State, Country Reports on Human Rights Practices for 1992 (Government Printing Office, Washington, D.C., 1993), Haiti discussion at pp. 421-425, and reports and press releases of the UN/OAS International Civilian Mission, March-August 1993.
2 The term "in hiding" (marronage), commonly used in post-coup Haiti, refers to a range of survival measures taken by individuals who have been persecuted or fear persecution. Being in hiding often involves constant movement, prolonged displacement and inability to work or to be united with family members. Its many manifestations include not sleeping at home at night, leaving town entirely, frequent moving from place to place or remaining confined indoors at a location deemed safe by friends or other helpers. It is often a progressive or fluid state and the causal fear and insecurity are compounded by economic hardship and personal isolation.
3 Article 13,2 of the Universal Declaration of Human Rights states, "Everyone has the right to leave any country, including his own, and to return to his country." Article 14,1 states, "Everyone has the right to seek and to enjoy in other countries asylum from persecution." Article 12,2 of the International Covenant on Civil and Political Rights states, "Everyone shall be free to leave any country including his own." Article 22 of the American Convention on Human Rights guarantees the right to leave any country and further guarantees the right to "seek and be granted asylum in a foreign territory..." (pgph. 6) and the right of non-refoulement (pgph. 8). Article 33 of the 1951 Convention Relating to the Status of Refugees prohibits the return of refugees to countries where they may face persecution.
AW/NCHR/JRS
3
September 1993
what to do about the political explosion in Haiti, and what to do about its human fallout. President Bush's response defied all logic. He reacted in a lukewarm manner to the critical fact of President Aristide's ouster while exerting considerable effort to keep the refugees off U.S. shores.
A longstanding U.S. policy of discrimination against Haitian refugees is the platform upon which the management of this extraordinary human crisis is based. So it comes as no surprise that precisely when military repression reached a new high, tolerated and even promoted by the de facto government, the quality of U.S. treatment of Haitian refugees reached a new low. Indeed, both the Bush and Clinton administrations have gone to great lengths to turn the meaning and intent of international and U.S. refugee law upside down in order to restrict to the fullest extent possible the entrance of Haitian refugees.
During his campaign, President Clinton promised to do what his predecessor had not: contribute to the return of democratic government in Haiti and discontinue what he denounced to be an illegal and dangerous policy of forced repatriation. Even prior to his inauguration, President Clinton began to take more forceful steps toward achieving the reinstatement of the constitutional government of Haiti. In July of this year an accord was signed by President Aristide and General Raoul Cedras creating the framework for a political settlement.4
Meanwhile, on the refugee question, President Clinton not only continued the policy of forced return, he strengthened it by surrounding the island with some twenty U.S. Coast Guard cutters and Navy vessels ordered to interdict and return any Haitian leaving the island for the United States. Shortly after his election, his administration appeared before the Supreme Court to argue, in Sale v. Haitian Centers Council, that the principle of non-refoulement did not apply to Haitian refugees on the high seas, thereby sacrificing the most fundamental principle of refugee protection in order to salvage that same policy.5 The decision in Sale v. HCC was a serious blow to the internationally recognized rule of non-refoulement and formally strips the U.S. of the moral authority it once exercised in the defense of asylum seekers the world over.
Responding to the Supreme Court decision, the United Nations High Commissioner for Refugees (UNHCR) stated that, "This decision is contrary to the views of UNHCR's Executive Committee that refugees should not be refused entry to a country where they are seeking asylum, and that asylum seekers rescued at sea should always be admitted, at least on a temporary basis... [The] UNHCR considers the Court's decision a setback to modern international refugee law which has been developing for more than forty years... It renders the work of the Office of the High Commissioner in its global refugee protection
4 The Governors Island Accord was signed on July 3, 1993 and provides a general framework for reinstatement of constitutional government. It requires President Aristide to name a Prime Minister who will be confirmed by a reconstituted Parliament. Steps are then to be taken for lifting of international sanctions, the retirement of army commander General Raoul Cedras, creation of an independent civilian police force and the October 30 return of the President. Robert Malval, the Prime Minister-designate named since by President Aristide, has been approved by the reconstituted Haitian Parliament. The U.N. Security Council suspended the sanctions against Haiti on August 28.
5 The 1951 Convention Relating to the Status of Refugees, Article 33, prohibits States from returning refugees to countries where they may face persecution. In a June 21, 1993 decision in Sale v. Haitian Centers Council, the Supreme Court found that the letter of neither domestic nor international law prohibited the United States from returning Haitian refugees picked up on the high seas, even though, as Justice Stevens wrote in the majority opinion, "such actions may even violate the spirit" of international treaty law.
role more difficult and sets a very unfortunate example."6
In March 1993, the Inter-American Commission on Human Rights of the Organization of American States (OAS) issued an interim resolution in response to a petition pending before it challenging the U.S. government's Haitian interdiction program. The resolution found the program to be in violation of international law and called for its immediate suspension.
The Haitian constitutional crisis might well be on the difficult path toward resolution in coming months. Nevertheless, it would be erroneous to assume that the signing of papers in New York, or the eventual reinstatement of legitimate government, will automatically result in an end to fear and violence. The serious flaws in the refugee policy will, therefore, continue to have consequences both as gross injustice to Haitians and as a disastrous legal precedent, breathing life into a global trend to narrow and limit the heretofore universally recognized principles protecting those who flee persecution.
The U.S. treatment of Haitian refugees touches on broader legal and moral questions amid the current debate about asylum reform. This policy provides an example of how the distinction between illegal immigration and refugee flight can be lost as countries close their borders to asylum seekers.
II. HISTORY OF U.S. POLICY TOWARD HAITIAN REFUGEES
The U.S. government has been a champion of large groups of asylum seekers around the world, particularly those fleeing what were socialist-bloc countries. For refugees from the former Soviet Union, Vietnam and Cuba, to name a few, the U.S. has upheld the principles of refugee protection, relaxed the standard for qualifying for refugee status and pressed other countries to accept large numbers of refugees by playing a leading role in seeking alternatives and providing resettlement opportunities.7
The purpose of the Refugee Act of 1980 was to bring U.S. law into compliance with international principles and make the granting of asylum and refugee status more uniform. "Until 1980, refugees were defined more by where they came from than by the circumstances and persecution which might have precipitated their flight."8 Conversely, the traditional approach of the U.S. government toward those fleeing regimes that it considers allies has been far more severe and often outright discriminatory, particularly when refugee groups see the U.S. as the logical choice for asylum. The treatment of Salvadorans, Guatemalans and Haitians fleeing brutal military-dominated regimes in the eighties is a case in point. Eventually temporary protected status was granted to Salvadorans, and Guatemalans now have
6 "Office of the High Commissioner concerned by Supreme Court Haitian Decision," June 22, 1993 press release.
7 "On November 21, 1989, the President signed into law...legislation [called the Lautenberg Amendment] establishing categories of refugee applicants. As a consequence, some 58% of all refugee admissions during FY 1990 are being adjudicated according to a standard different from the worldwide standard." Inzunza, "The Refugee Act of 1980 Ten Years After Still the Way to Go," International Journal of Refugee Law, Vol. 2 No. 3, 1990, p. 420. According to Inzunza, in FY 1990, 96% of refugees resettled in the U.S. would be applicants from communist-bloc countries.
8 Ibid, p.416. The Refugee Act of 1980, among other things, regulates overseas processing of refugees (Section 207), asylum adjudication (Section 208) and incorporates the principle of non-refoulement (Section 243).
somewhat better access to the asylum process.9 The situation for Haitians, however, has only continued to deteriorate.
The U.S. government has long deplored the practice of totalitarian regimes of restricting the exit of their citizens. Nevertheless, it has lauded the Haitian government for measures it has taken since 1980 to restrict the exit of Haitian refugees. What's more, it has recently become the principal enforcer in denying Haitians the right to leave their country and particularly the right to seek asylum.
For years the U.S. has used a bilateral agreement with the Haitian government as the basis for
the interdiction, screening and repatriation of Haitian asylum seekers. In 1981 the U.S.-Haitian interdiction program was launched based on an exchange of diplomatic letters between the two governments and an executive order from then-President Reagan.10 Under that agreement, Haitian "flag vessels" found in international waters and bound for the U.S. would be interdicted and returned to Haiti. However, the agreement stipulated the U.S. obligation to screen Haitians for claims of persecution, thereby formally recognizing the application of the internationally recognized principle of non-refoulement.
During the next decade, the procedures used to screen boat people and determine refugee status were questioned and attacked by refugee advocates and human rights monitors. From 1981 until the September 1991 coup, 22,716 Haitians were repatriated, according to State Department figures. A total
of twenty-eight were allowed to enter the U.S. to pursue asylum claims.11 The harsh treatment afforded Haitians in the U.S., who have routinely suffered prolonged detention and asylum-approval rates of less than two percent, has also been a long-standing concern. However, at issue were the procedures, not the principle.
In the immediate aftermath of the coup, U.S. cutters initially continued to pick up Haitians on the high seas and screen them onboard for asylum seekers. When this practice was legally challenged as insufficient, screening at the U.S. Naval base at Guantanamo Bay, Cuba commenced. Then the discussion and the lawsuits focused on whether "screened-out" refugees could be forcibly repatriated and whether HIV-positive "screened-in" Haitians could be detained indefinitely at Guantanamo and denied due-process rights enjoyed by other screened-in asylum seekers.12
9 Salvadorans were granted Temporary Protected Status (TPS) through the Immigration Act of 1990, which added section 244A to the Immigration and Nationality Act providing the Attorney General with discretion to grant TPS. Both Salvadoran and Guatemalan refugees also were afforded the opportunity to have their asylum claims reconsidered pursuant to the 1991 District Court decision in American Baptist Churches v. Thornburgh, 760 F.Supp. 796 (N.D.Cal. 1991)
10 Executive Order 12324, September 29, 1981. See Bill Frelick, "Haitian Boat Interdiction and Return: First Asylum and First Principles of Refugee Protection," U.S. Committee for Refugees, February 20, 1993, p. 6.
11 L. Guttentag and L. Daugaard, "United States Treatment of Haitian Refugees: The Domestic Response and International Law," American Civil Liberties Union, International Civil Liberties Report, Vol. 1, No. 2, June 1993, p. 10.
12 Refugees were "screened-in" based on a "credible fear" standard in order to pursue their asylum claims in the U.S. under the higher standard of a "well founded fear of persecution on account of race, religion, nationality, membership in a particular social group or political opinion." "Screened-out" refugees were then returned to Haiti. For a 1992 chronology of U.S. program, policy and legislative decisions affecting refugees and asylum seekers in 1992, see "Refugee Reports," U.S. Committee For Refugees, Vol. XIV, No. 1, January 29, 1993, p. 6.
Ironically, it was not until the September, 1991 coup introduced some of the most brutal repression in Haitian history, that the U.S. decided to do away altogether with any pretense of screening fleeing refugees. On May 24, 1992, the parameters of debate shifted dramatically when then-President Bush issued the "Kennebunkport Order" under which all Haitian boats would be interdicted by the U.S. cutters and their passengers returned directly to Port-au-Prince with no prior screening for asylum seekers.13 With the May 24 order, the Bush Administration abrogated the 1981 bilateral agreement with Haiti. The current policy is based on a unilateral action that lacks the formal consent of the Haitian government.
In this way, the Bush Administration solved the U.S. refugee "problem" through a policy of containment that has curtailed the flight, and the rights, of potential refugees. President Clinton inherited this policy, and, swallowing his pre-election aversions, fine-tuned it by blockading the island.
Interdiction of Haitian refugees:

1981 to Sept. 1991: 22,803
Sept. 1991 to present: 30,932
May 24, 1992 to present: 5,826

Total since 1981: 53,735
III. IN-COUNTRY PROCESSING IN HAITI
A. Background
The United States set up an in-country processing program (ICP) in Port-au-Prince in February 1992 to afford Haitians the option of seeking asylum without first taking to the high seas. At this time refugee screening was still taking place at Guantanamo. Since the May 1992 U.S. presidential order, ICP has been the only recourse for Haitian asylum seekers and has become a palliative for critics of U.S. policy. When he announced the temporary continuation of the Bush interdiction policy, President Clinton added that ICP would be expanded and improved, thereby better justifying forced repatriation.
This novel application of ICP is a first worldwide. In-country processing is part of a broader set of procedures contained in the 1980 Refugee Act and was not intended as a sole means of protection.14 Similar programs in Vietnam, Cuba and the former Soviet Union were designed to facilitate the processing of chosen groups of refugees the U.S. was already predisposed to accept based on a concept
13 Executive Order 12,807, Fed. Reg. 23,133, May 24, 1992.
14 As noted in the amicus curiae brief filed in Sale v. Haitian Centers Council, Joshua R. Floum (Attorney of Record) et al. on behalf of Senator Edward Kennedy and former Representative Elizabeth Holtzman and other Members of Congress (hereinafter Members of Congress Amicus), "(T)he language, structure and legislative history of the Act, as well as years of executive application of the Act, demonstrate that Congress intended that the Act's three separate but concurrent forms of refugee protection comprise a comprehensive scheme." (p. 5)
of "presumptive eligibility."15 In Haiti, on the other hand, the program is designed to cut off a mass influx of people the U.S. is predisposed to reject. What's more, it is the first case where ICP has been imposed on asylum seekers as a substitute for the ability to escape and seek safe haven before articulating individual claims.16 In the case of Vietnam, the U.S. played a forceful role in encouraging countries of first asylum to accept boat people temporarily until they could be resettled.17
Furthermore, in other countries where ICP became part of a U.S. strategy for resettling refugees, the period of acute political upheaval was over, human rights problems were chronic and predictable and government policies were solidified. In this context, agreements were reached with the respective governments to facilitate the orderly processing of selected groups of people. In Haiti, political turmoil is at its height and more complicated yet, the U.S. does not even recognize the de facto government, much less enter into agreements with it. These factors effectively remove the safeguards which define the logic and efficiency of ICP in other countries. The driving force behind this plan seems to be the historically unshakable U.S. decision not to become a country of first asylum for Haitian refugees.
B. Operational Structure
By definition, overseas refugee processing depends heavily upon executive discretion, and foreign policy considerations are part of the decision on what groups are considered of special humanitarian interest to the U.S.18 In Haiti the State Department is the principal policy-making bureau behind ICP and directly manages it. It has been responsible for setting up the program, providing services to INS officers and contracting with the International Organization for Migration (IOM) and more recently with two non-governmental organizations.19 A Refugee Coordinator manages the program under the auspices of the U.S. Consulate.
Operationally, the State Department's role encompasses all activities except for specific case adjudication. It is responsible for initial "vetting" or grading applications into priority categories for consideration by INS. It has contracted the IOM in Port-au-Prince to receive applicants, prepare asylum
15 For example, Inzunza writes, "Although the statutory definition of refugee changed in 1980, until August 1988, all Soviet and some Indochinese refugee resettlement applications...were being found eligible for refugee status under what amounted to a presumption of eligibility...." (Inzunza, "The Refugee Act of 1980...," p. 418.)
16 See, for example, Members of Congress Amicus p. 10: "The government's conduct in forcing Haitians back to Haiti and funnelling them through section 207 overseas refugee processing violates the purpose of the Act to make these protections comprehensive and to reaffirm the principle of non-refoulement."
17 "[A] similar in-country procedure for processing refugees was created at the height of the Vietnamese boat exodus. However, those who decided to flee by boat were never turned back because such a program existed. And the United States was vigilant in seeing that other governments would not summarily push back the boat people, demanding that they be given temporary asylum in the region." Bill Frelick, "Clinton's Haitian Policy: Same Old Story," St. Louis Post-Dispatch, January 19, 1993. (Reprinted by U.S. Committee for Refugees.)
18 Section 207 of the Immigration and Nationality Act "enumerates several factors that may be considered during the consultative process, including the impact on the "foreign policy interests of the United States." The statute, however, does not identify numerical limits, special humanitarian concern or a foreign policy impact for consideration in section 208 (a) asylum or section 243 (h) withholding decisions..." as cited in Members of Congress Amicus, p. 16.
19 The International Organization for Migration is an intergovernmental organization that implements various programs worldwide for migrants and refugees.
claims for adjudication and handle all out-processing. More recently, two non-governmental organizations (called Joint Voluntary Agencies, or JVAs), World Relief (WR) and the United States Catholic Conference (USCC), have been contracted to run the newly opened regional centers in Les Cayes and Cap Haitien respectively.20 The U.S. Embassy also serves as the main resource on country conditions, social and political organization and human rights data for the program, providing briefing materials and expert opinions.
The IOM staff of forty includes five caseworkers: three Haitian-Americans and two U.S. citizens of non-Haitian background. Caseworkers must be fluent in English, Creole and French and have a university degree. The other staff are form-fillers to assist with completion of standard INS forms, interpreters and administrative staff.
The INS has assigned an Officer in Charge (OIC) and an Assistant Officer in Charge (AOIC), both with one-year contracts. The eight interviewing officers responsible for adjudication are drawn primarily from a pool of examiners and inspectors who have received a three-week asylum training course and are on sixty-day rotations. A quality assurance team comprising an asylum corps officer and a legal advisor from the INS General Counsel's office is assigned on a thirty-day rotation and is responsible for case review of all decisions. The rest of the staff is administrative.
C. Recent Expansion
A technical team including representatives from the State Department, the INS and the Congress traveled to Haiti last January to make recommendations for improving and expanding the program. These included measures to increase capacity and efficiency and the opening of two regional centers.
After a separate review of the program, the INS installed the quality assurance team described above. Another INS change was to draw on a pool of officers who had been through a three-week asylum law training course. According to the State Department and the INS, all of the recommendations were approved and have been implemented.21
D. Current Functioning
The following is a brief outline of the process itself.
1. The applicant picks up a preliminary questionnaire from IOM, which is filled out and returned. (Questionnaires can also be obtained by requesting one by telephone or mail or by sending a friend.) If an applicant is illiterate or otherwise needs assistance, an IOM employee can help fill out the form. Unfortunately, this happens in a public and quite crowded reception area in full hearing of others present. Some people hire strangers to fill out the forms for them, while others seek help from family members. The first page of the questionnaire is biographical information. The second page requests information on organizational and political affiliations, government posts held and any arrests or problems with the authorities.
20 In other examples of overseas refugee processing, JVAs work closely with the State Department and the INS to facilitate the orderly resettlement of refugees.
21 Unfortunately, the technical team's report and the follow-up report on the implementation of the recommendations have been classified.
2. The application is vetted (prioritized) into an A, B or C category for adjudication by the Refugee Coordinator's staff.22 Vetting is carried out based solely on the contents of the questionnaire. A vetting supervisor, who has been with the program since the beginning, reviews all vetting decisions.
"A" cases are described as high-profile, often involving an official of the Aristide government, a member of a targeted profession such as journalists, or a grassroots organization leader. The case is considered extremely urgent, and most involve past persecution.23 These make up about five percent of the total vetted applications. "C" cases, about ten to fifteen percent of the total, are those in which (according to the questionnaire) the applicant has made no claim to asylum. The vast majority, over eighty percent of all cases, are "Bs". In many cases, the applicant has articulated some fear of persecution but the case may need to be developed or is not considered top priority.24
All "A" cases are reviewed by the Refugee Coordinator, who will follow particularly sensitive ones. He will also occasionally glance through "B" and "C" cases. "A" cases are scheduled for an IOM and an INS interview the same day or the following day. Currently, "B" cases are receiving interview dates for between January and March, 1994. "C" cases are not scheduled for interviews.
3. At the time of the IOM appointment, the necessary forms are filled out and the applicant is interviewed. The purpose of the interview is to review the questionnaire with the applicant and elicit further information relevant to the application. The caseworker writes up the interview and prepares the file for INS.
4. The same day or the following day, the INS reviews the file, interviews the applicant through an interpreter, and makes a provisional decision. This decision is based upon whether the applicant has met the burden of proof and whether the applicant is considered credible.25 The INS interviewer's notes are incorporated into the file along with the recommended decision. Cases are reviewed by the Assistant Officer in Charge and by the quality assurance team, which assesses whether the facts provided are consistent with the decision, whether a credibility judgement is adequately supported and whether legal issues raised by the case have been correctly resolved. A U.S. Embassy political officer and an ethnic affairs expert on the IOM staff are on site and serve as the principal resources on local conditions. The INS Resource Information Center (RIC) provides country condition information from a variety of
22 The vetting staff is generally composed of part-time contract employees, often relatives of U.S. Embassy personnel.
23 Interview with Refugee Coordinator Luis Moreno, Port-au-Prince, June 14, 1993.
24 The approval rate is thirty-three percent for A cases and five percent for B cases. This means that B cases account for a higher number of actual case approvals.
25 The standard for asylum under the Refugee Act of 1980 is a "well founded fear of persecution...on account of race, religion, nationality, membership in a particular social group or political opinion." This includes, but is not limited to, past persecution. The adoption of this definition brought the U.S. into compliance with the international definition of refugee.
Regarding credibility, the INS "Basic Law Manual: Asylum" (from the Asylum Branch of the Office of the General Counsel, March 1991) states: "[A]n alien's own testimony may be sufficient, without corroborative evidence, to prove an asylum claim if that testimony is believable, consistent and sufficiently detailed to provide a plausible and coherent account of the basis of the claim." According to the UNHCR, "The applicant's statement must be coherent and plausible and not run counter to generally known facts." Handbook on Procedures and Criteria for Determining Refugee Status, January 1988. p. 48.
governmental and non-governmental sources, including church, refugee and human rights groups.26
5. Out-processing: All approvals are considered conditional until out-processing has been completed. This includes a medical examination, obtainment of a passport (passports are required by the Haitian authorities in order to leave the country) and securing sponsorship by an individual or organization in the United States. For the passports, fingerprints must be obtained at the police station. Obtaining passports for all individuals included on an application may require getting a birth or marriage certificate for the first time.27
6. Motions to reconsider: If a case is denied, the IOM (or the JVA) receives a form letter indicating the category of the reason for denial. These letters are not case-specific. The applicant is then notified. The denial includes notice of the right to file a motion to reconsider. To file the motion the applicant writes a letter to the District Director of INS in Mexico explaining the reasons why the case should be reexamined. These letters can be translated by the IOM (or the JVA in the regional centers). More recently, a notice that the letter must be in English has been included on the denial letter. In general, the letter must present new information; few cases are overturned based on the premise that the original decision was faulty.28 Approximately twenty motions to reconsider are received daily. The decisions are made in Haiti and signed by the INS Officer in Charge on behalf of the District Director. There is a delay of several months in most cases.
7. Regional centers: A regional ICP center opened in Les Cayes on April 26, 1993. It is run by World Relief under contract to the State Department.29 Like IOM in Port-au-Prince, World Relief's mandate is to prepare cases for INS adjudication. Their expatriate staff is composed of a director and a deputy director. Four form-fillers, an accountant, a receptionist and four security guards have been hired locally. The centers are set up to prepare forty cases per week for INS adjudication. As currently designed, a team of two INS officers will spend two weeks per month in each regional center. An unfortunate feature of the Les Cayes center is its location just one block from the army garrison, where potential applicants are often held and beaten.
There are certain variations to the procedure in the regional offices. For example, an applicant in Les Cayes has the questionnaire vetted and forms filled out on the same day. Vetting is done by the JVA director. As of late June, "B" cases were being scheduled for interviews sometime in July. "Cs"
26 A U.S. Embassy political officer in charge of refugee and migration affairs (and deputy refugee coordinator) has traveled extensively in Haiti following up on repatriates. To date, over 4,000 have been interviewed. See also, News From Americas Watch and National Coalition for Haitian Refugees, "Half the Story: The Skewed U.S. Monitoring of Repatriated Haitian Refugees," June 30, 1992. The human rights liaison, born in Haiti, is an IOM employee who works closely with the Refugee Coordinator and the INS. He is responsible for contacts with local organizations and handles off-site interviews. He is often consulted on sensitive cases as the resident expert on Haitian matters. He also informs ICP personnel through translation and summary of local press.
27 Marriage certificates cost about fifty gourde or US$4.17. Birth certificates vary between fifteen and sixty gourde, or $1.25 to $5.00. The required photographs cost seventy-five gourde or $6.25 a set. The minimum and standard wage for a factory worker is fifteen gourde/day or $1.25. (Based on an exchange rate of twelve gourde = US$1.00.) IOM, if asked, will defray some of the cost. World Relief said they would help pay if asked, but do not tell applicants about this service.
28 This has occurred, however, particularly when an NGO has gotten involved.
29 The center in Cap Haitien opened in May and is run by the United States Catholic Conference.
were not being scheduled. The director of World Relief told AW and NCHR that mechanisms were in place to transport an urgent case to Port-au-Prince, although no such case had yet occurred.
Two INS officers are scheduled to visit each regional center every two weeks. During those visits they hold interviews for up to 140 applicants. These files are taken back to Port-au-Prince for quality assurance and final adjudication. A decision is communicated to Les Cayes, at which time out-processing is begun for those approved. Medical examinations are completed locally. Fingerprinting, passport obtainment and sponsorship are handled through IOM in Port-au-Prince. Most often the approved applicant waits in Les Cayes for all of this to be completed. Few can afford to stay in Port-au-Prince for that length of time. World Relief says that they pay expenses if asked but do not volunteer such assistance.
In other countries where ICP is used, non-governmental organizations with experience in refugee processing and resettlement have worked closely with the State Department and the INS to prepare and process refugee claims. World Relief and USCC have only recently become involved in ICP in Haiti, taking charge of the two regional centers opened in April and May of this year. According to a USCC official, JVAs are experts on refugee issues and can use that knowledge to help people through the process.30 However, a State Department official said that the JVA role is to provide "a service to the State Department, not to act as advocates."31
Both World Relief and USCC say that as long as ICP is a reality in Haiti, their participation can have a positive effect in the efficient and fair processing of Haitian refugees. However, they share the broader NGO perspective that not even a new and improved ICP is a substitute for the right to seek safe haven. Fr. Rick Ryscavage, Executive Director of the Catholic Bishops' Office of Migration and Refugee Services, recently stated that "[T]he processing center is no substitute for justice either within Haiti, or in the treatment of refugees who try to flee Haiti."32
Available Data on ICP Caseload, June 1, 1992 - July 30, 1993

Dates                    Cases vetted (#)  Cases adjudicated (#)  Cases approved (#)  Approval rate (%)*  (%)**  Cases entered U.S. (#)
Jun 1 - Jul 1, 1992             1,337              109                  12                  11             0.9            2
Jul 1 - Aug 3, 1992               898              394                  26                  6.6            2.9            7
Aug 3 - Sep 1, 1992               727              575                  39                  6.7            5.3            1
Sep 1 - Oct 2, 1992               666              557                  27                  4.8            4.0           11
Oct 2 - Oct 30, 1992              423              331                   7                  2.1            1.7            0
Oct 30 - Nov 27, 1992             436              216                  12                  5.5            2.8           20
Nov 27 - Jan 1, 1993              318              223                  26                 11.6            8.2           12
Jan 1 - Jan 31, 1993              320              166                  16                  9.6            5.0           16
Jan 31 - Mar 28, 1993           4,210              650                  99                 15.2            2.4          n/a
  (2 months)
Mar 28 - Apr 30, 1993           4,315            1,173                  26                  2.2            0.6          111
Apr 30 - May 27, 1993           1,318              596                  39                  6.5            3.0           50
May 27 - Jul 2, 1993            2,168              855                  91                 10.6            4.2           61
Jul 2 - Jul 30, 1993            2,259              850                  35                  4.1            1.5           69
Total # cases                   9,578            6,755                 458                  6.8            4.8          368
Total # persons                34,171            7,947               1,243                 15.7            3.7          937

* Figure represents percentage of adjudicated cases.
** Figure represents percentage of vetted cases.

Source: Compiled from cumulative figures from the State Department. The methodology employed, as well as all findings, should be transparent and open.

30 Interview with Shep Lowman, Washington D.C., June 9, 1993.
31 Interview with Ken Foster, Refugee Program, State Department, Washington D.C., June 9, 1993.
32 U.S. Catholic Conference, "Church Agency Disappointed at Supreme Court Ruling Upholding Administration's Decision to Return Haitian Refugees," press release, June 22, 1993.
IV. A CRITICAL ASSESSMENT OF THE ICP PROGRAM
Since its inception, ICP in Haiti has come under severe criticism from human rights groups and refugee advocates. In prior reports on Haiti, AW and NCHR have pointed out many inadequacies of the policy in general and of the ICP program specifically.33 Nevertheless, it is of particular concern that the recent expansion and streamlining of the program under the Clinton Administration has led U.S. officials to tout it as providing "complete coverage" and to see it as a measure which mitigates and justifies the policy of forced return.34
The authors recognize the serious efforts made in recent months by individuals involved in the program to make it more efficient and "user-friendly." It does appear that the program has improved in several areas since the technical team visit in January. These include:
1. Expedited processing of Priority A cases: Exceptionally urgent cases can now be turned around in approximately two weeks including the out-processing.
2. Quality assurance: By using quality assurance officers, including some with prior experience in Guantanamo, adjudication decisions are being reviewed systematically by a General Counsel's office attorney and a trained asylum officer.
3. Use of interviewing officers who have attended a three-week asylum training program.
4. Training of IOM staff: Attempts have been made to address the complicated problem of staff/applicant interaction and assure quality and standardization of interview write-ups.
5. The recent opening of two regional centers and the use of JVAs to run those centers.
Nevertheless, these improvements have done little to ameliorate a number of basic shortcomings. These are primarily a result of conceptual inconsistencies, which stem from substituting ICP for traditional self-help remedies such as the ability to flee.
A. The Central Role and Biased View of the State Department
The ICP program is based on the State Department's premise that the number of genuine asylum seekers is actually quite small. A State Department official involved in setting up the program voiced what seems to be the common belief that "most Haitians are economic migrants; it diminishes our program worldwide if we accept economic migrants."35 Furthermore, as stated above, the reason that ICP became the antidote for the Haitian refugee problem in the first place was a desire to keep the numbers admitted to the U.S. to a minimum.
33 See generally, Motion for Leave to File Brief Amicus Curiae and Brief of Human Rights Watch, Amicus Curiae, in Support of Respondents, McNary v. Haitian Centers Council (later changed to Sales v. Haitian Centers Council), October term, 1992 and AW and NCHR, "Half the Story," New York, June 30, 1992.
34 Interview with Ken Foster. Assistant Attorney General Webster Hubbell is quoted saying, "Interdicted boat migrants who fear political persecution will be afforded meaningful opportunity for refugee processing in Haiti." (Editorial, "Gone Under a Second Time," Miami Herald, June 22, 1993.)
35 Interview with Ken Foster.
Ceiling determinations are limits on refugee admittance, made by the Executive branch. They are often made independently of specific country conditions and do not lend themselves to responding to crises. The ceiling for Latin America for fiscal year 1993 was 3,500, of which 500 were allocated to Haiti. This decision was made in August 1992, in the midst of widespread human rights abuses and three months after the Kennebunkport Order made ICP the only option available for Haitians.36
Furthermore, refugees outside the United States in general have far fewer due-process rights than asylum seekers who have made it to U.S. shores, and admission is much more discretionary. Although U.S. refugee law, in contrast to international refugee law, does include the concept of a refugee still in his or her own country, there is an increased sense that any approvals are tantamount to altruism. In refugee processing the officer makes a final decision, there is no judicial or administrative review and the applicant bears a greater burden of proof.
U.S. Embassy personnel or IOM contract employees are the principal resources for IOM and INS interviewers.37 The State Department official interviewed warned that one should not "take people's statements at face value. Past reports such as those put out by the American Immigration Lawyers Association, the Lawyers Committee for Human Rights, Amnesty International etc. contain lots of hearsay. We investigate the cases."38 The fact that this view is being conveyed within the program certainly undermines the value of having non-governmental human rights material made available to ICP staff. For example, an asylum officer recently assigned to the quality assurance team told AW and NCHR in Haiti that at least some INS personnel consider reports from human rights NGOs and the United Nations/Organization of American States International Civilian Mission (UN/OAS Mission) totally unreliable.39
Furthermore, the State Department view of the human rights situation in Haiti seems to vary depending on who is asking. The most recent State Department report on country conditions in Haiti stated:
Haitians suffered frequent human rights abuses throughout 1992 including extrajudicial killings by security forces, disappearances, beatings and other mistreatment of detainees and prisoners, arbitrary arrest and detention, and executive interference with the judicial process....40
However, a May 7, 1993 State Department advisory opinion in the case of a Haitian popular-movement activist applying for asylum in the U.S. gave quite a different analysis of the situation:
During 1992, the level of political violence has been considerably reduced....Despite Haiti's violent
36 There is no ceiling for asylum seekers in the U.S. The ceiling for overseas refugee admissions from Haiti for fiscal year 1993 was 500. Although that number has been surpassed and 1,000 unallocated slots were assigned to Haiti, the fact remains that a ceiling is in place affecting the number of Haitians who will eventually be admitted.
37 A review of asylum claims in the U.S. by Harvard University's National Asylum Study Project shows a heavy reliance by INS asylum officers on State Department resources, according to the Study Coordinator.
38 Interview with Ken Foster.
39 The officer, T.J. Mills, was later suspended from the program.
40 Department of State, Country Reports (for 1992), p. 421.
reputation, it is possible for many people to find safe residence in another part of the country....We do not believe the fact that an ordinary citizen is known to support or to have supported President Aristide by itself puts that person at particular risk of mistreatment or abuse.
Under the heading "False and Exaggerated Claims by Previous Returnees," the opinion goes on to say:
...[I]nvestigations made by U.S. Embassy officers there indicate that many of the reports made by asylum applicants of arrests, killings and intimidation are exaggerated, unconfirmable or false....41
This view suggests a bias against Haitian asylum seekers by implying that if some have lied, then many probably lie.
In contrast, the June 3, 1993 report by the UN/OAS Mission stated as follows:
The most serious and numerous human rights violations...involved arbitrary detentions, systematic beatings and torture perpetrated by members of the armed forces or persons operating at their instigation or with their tolerance. The Mission has also been informed of cases of arbitrary executions and deaths following torture inflicted while in detention.
As indicated below, these violations of the right to life and integrity and security of person are intended primarily to restrict or prohibit the exercise of the freedoms of opinion and expression, assembly and peaceful association. Unfortunately [the report] provides only a partial picture of the extent to which human rights violations in Haiti are widespread and systematic.42
More recently, in an August 11, 1993 press release, the UN/OAS Mission
expresses its grave preoccupation at the numerous violations of human rights in Haiti. In particular, the Mission condemns the arbitrary executions and suspicious deaths which have reached alarming levels in the area of Port-au-Prince, where 36 cases have been identified since July 1st.
The targets of these grave human rights violations are members of popular organizations and neighborhood associations, but also simple citizens who had the misfortune to find themselves in the path of the killers.
....Attacks on freedom of association and expression continue, as well as violations against personal security and physical integrity.43
The U.S. Embassy's political officer in charge of human rights was reluctant to talk on the record to AW and NCHR about human rights issues. However, she painted a picture of random, undirected
41 According to the Harvard National Asylum Study Project, this kind of opinion is typical of Haitian cases.
42 As of May 1993, the UN/OAS Mission had 141 international staff members of which eighty-six were deployed in regional teams around the country and twenty were in training.
43 As translated by the Washington Office on Latin America.
violence and general lawlessness merely tolerated from above, as opposed to the targeted, patterned and strategic repression, accompanied by a sense of chaos and lawlessness, that is reported by both local and international human rights groups.44
B. Political Isolation Weakens the Program
The ICP program is isolated from organizations that could strengthen it by serving as resources. The U.S. policy of forcibly returning Haitian refugees is widely considered to be discriminatory and ultimately in violation of principles of international law. As a centerpiece of this policy, the program has had little contact with the UN/OAS Mission, the UNHCR or local human rights groups which are the real experts on local conditions. A UN/OAS Mission official said, "The Embassy had, until recently, not sought out contact with the Mission. Contact has been minimal."45 While some private human rights groups assist individuals applying to the program on an ad hoc basis, they do not encourage it. Furthermore, they distrust the program's motives and are quick to point out its inadequacies.
C. No Safe Haven Component is Available
The most obvious weakness of the ICP program is that there is no safe haven component for asylum seekers. This means that they do not enjoy even the temporary protections and security to which asylum seekers are entitled under international law.46 The State Department official interviewed told the authors, "We don't provide safe haven....So far it hasn't been an issue because people can call, send letters, access a church group."47 Nevertheless, ICP applicants have been persecuted while awaiting final resolution of their cases. The Refugee Coordinator stated, "No cases tie in harassment, beatings or killings to the refugee program."48 However, that distinction is quickly blurred, since applicants with genuine claims apply to the program precisely because they are at risk.
The authors were able to document several cases of persecution during early June 1993, involving ICP applicants.49
One case reported confidentially occurred some time during the first two weeks of June. It involved a young man who had filled out a preliminary questionnaire to apply for asylum, but never made it back to his interview. When he left the ICP locale he was arrested and taken to a Port-au-Prince police station. He was kicked and beaten. Someone who knew him helped him get released after at least one
44 Interview with Ellen Cosgrove, U.S. Embassy, Port-au-Prince, June 16, 1993.
45 Interview, Port-au-Prince, July 1993.
46 For example, the UNHCR states that in cases of mass influx, temporary refuge should always be provided. See "Conclusions of the International Protection of Refugees" adopted by the Executive Committee of the UNHCR Programme, Office of the UNHCR (Geneva: 1980), p. 49.
47 Interview with Ken Foster.
48 Interview with Louis Moreno, Port-au-Prince, June 14, 1993.
49 Real names are not used in the following testimonies except where stipulated, in order to protect the sensitive situations of our informants. In some cases, specific dates and places have been eliminated for the same reason. All interviews were carried out in Port-au-Prince during the week of June 13-20, 1993.
day and night in prison.50
In Les Cayes, the problem is magnified by the small-town, everyone-knows-everyone atmosphere:
"Claude" is an Aristide supporter and activist. He was president of an election bureau during the 1990 presidential elections, and he collaborates with grass-roots organizations. He volunteers with the local Institute for Social Welfare and Research doing AIDS education. He has a long history of problems with the local authorities, particularly with one government delegate, which he says began due to his work during the 1990 elections. He was first arrested in August 1992 and briefly detained. On November 27, 1992, he was harassed, threatened and chased by the same delegate and two armed men in civilian clothes. A few days later, on December 1, he was detained again and jailed for six days for being Lavalas.51 On December 31, the delegate threatened him with arrest in the street. When passersby protested, he was left alone. On January 6, 1993, the delegate arrested him, and he was taken to the police station. He was threatened with death, accused of being Lavalas, anti-army and a thief. On January 7 his captors decided to make a formal complaint on charges of theft, criminality and morally assaulting the authorities. He was imprisoned at the Les Cayes military headquarters. The public prosecutor ordered him released after six days under "provisional liberty" status. He stopped living in town and lived in hiding from then on. On April 27 the delegate saw him again and said "It's you; you're under arrest." He jumped in a taxi and went to the office of the UN/OAS Mission. The World Relief office for ICP had opened that same day in Les Cayes. He went there to apply and was given a questionnaire. He was interviewed on May 4 and received notice of conditional approval on May 21. On June 1 he was arrested by the military at the request of the same government delegate, who said he was going to have him shot. He was released on June 4 after U.S. Embassy intervention. As of June 20, he was still in Les Cayes waiting for out-processing to be completed.
He asked the AW and NCHR to intervene to expedite his case. He said he was afraid and living in hiding.52
"Jean" is a thirty-eight-year-old carpenter and furniture maker from Les Cayes. He has been a member of a number of local popular organizations, among them the Assemblee Populaire Nationale and the Union for Change. Prior to the coup he had been arrested and tortured in 1988 under General Henri Namphy's regime. He has been tracked and harassed by the army since the coup because he was a known activist and because he filed a complaint against the official responsible for his torture in 1988. His most recent problems have been with a local government delegate. On several occasions in December 1992 and January 1993 he was threatened and harassed by the delegate. Beginning in January, police and soldiers began arriving at his house. At that time, he moved to another neighborhood, only visiting his home in the daytime. He knows that military auxiliaries known as attachés frequently come to his house at night. After receiving encouragement from a friend, he decided to apply for political asylum. He was hesitant to go since the office was located just up the street from the military headquarters,
50 Interview with a Haitian source close to the ICP program on the condition of confidentiality, Port-au-Prince, June 17, 1993. Hereinafter referred to as a confidential Haitian source.
51 Lavalas is the Creole word meaning "landslide"; as used colloquially, it refers to the broad-based popular movement that elected President Aristide.
52 Interview, Les Cayes, June 19, 1993. Americas Watch and NCHR expressed concern about this case to the Refugee Coordinator and World Relief. The delay was due to the fact that the required passport had not yet been issued.
but his friend explained how to check out the area and then go in. He applied on May 20, was interviewed by World Relief on June 3 and was scheduled for an INS interview on July 1. Two weeks before that interview, at about 7:30 p.m. on June 18, two soldiers in civilian dress came to his house just as he was arriving. He went inside, and they told him to come out and talk to them. He responded that he was in his own house. They yelled that he was Lavalas and he responded, "Yes I am, and I have a right to be." They told him that they were going to find a way to finish him off. Among other things, they said that when his "Papa Aristide" came back they were going to leave a lot of people "on the ground." They left saying they were coming back with the police. He immediately called the UN/OAS Mission, and two representatives went to his house. The men did not come back, but his wife reported that the same two men had been to the house on two occasions earlier that day and seemed to be waiting for him to show up. The next day, he told AW and NCHR that his wife was packing up the house, now too afraid to continue living there herself.53
If a conditionally approved individual is found to be HIV-positive, the question of protection becomes even more serious. These applicants must file a waiver which is granted at the discretion of the Attorney General, in order to be allowed admission into the United States.54 The added aggravation with ICP is that the person must wait, like a sitting duck, in Haiti, even though he or she has been officially recognized as having a well-founded fear of persecution (or indeed of having suffered persecution). According to IOM, several waivers had been filed in 1993 but were still pending as of June. However, in September, the INS office in Washington reported being unaware of any waivers pending.
AW and NCHR are greatly concerned about one particular case. The applicant was kidnapped at gunpoint and detained for several days at an unknown site, tortured and found dumped on the street days later. His application for political asylum was conditionally approved rapidly, given the gravity of his situation. He was then found to be HIV-positive. In April, he applied for a waiver through the ICP program. Five months later, in September, it was discovered that his application had never left Port-au-Prince due to an administrative delay over a form. AW and NCHR brought the case to the attention of the ICP staff. Meanwhile, the conditionally-approved applicant and his family remain in Haiti at serious personal risk.
D. Operational Deficiencies
By nature and by design, the number and type of people receiving the reasonably expedited processing that asylum seekers require are drastically reduced, and the fair and consistent adjudication of claims is sabotaged. There are examples of this at every stage of the process.
The system is overloaded. This is perhaps unavoidable, given the desperate need of so many Haitians and the fact that all avenues of non-immigrant entry to the U.S. are closed to most people. Those who wish and need to leave for a variety of reasons try the program. This "magnet effect" can impede genuine asylum seekers from receiving a fair hearing and many might be getting lost in the crowd. One
53 Interview, Les Cayes, June 19, 1993.
54 According to experts at the Centers for Disease Control, all refugee applicants are screened for HIV and other diseases such as tuberculosis. HIV-positive approved asylum applicants must obtain a waiver, and these can delay an inordinately long time. An applicant must show, among other things, that his or her medical expenses will be covered at no cost to the government.
international refugee expert said, "It becomes seen as an immigration office, which limits refugee access."55 Large numbers of economically motivated applicants may also contribute to the perception, held by some U.S. officials, that most Haitians are economic migrants. The opinion of the Haitian source close to the program was that "INS's first impression of people is that they are garbage, beggars. They are seen as economic refugees from the start."56 The quality assurance officer recently expelled from the program reported that a prevalent view among INS personnel is that most applicants are lying.
The use of local Haitian staff is problematic. Early criticism of the program focused on the use of Haitian staff in all stages of processing. All of the IOM reception, interpreter and form-filling staff are Haitian. (Three caseworkers are Haitian-Americans.) Problems such as disrespectful treatment of applicants from a different social class and political perspective have been reported. The confidential Haitian source told AW and NCHR of an applicant who apparently recognized one interpreter as having been involved in killings in his home town. "Haitians have learned not to trust Haitians," said this source, "and Haitians have learned not to talk politics."57
The State Department and IOM recognize and have made efforts to overcome this problem through training, including a recent weekend seminar which provided pointers on how to interview an applicant and sensitivity training. Nevertheless, this dynamic continues to make it difficult to create an atmosphere of trust necessary to ensure fairness in access and adjudication of claims.58
The vetting process is inadequate. Screening based solely on a written questionnaire, without guidance about the process and without the opportunity to see or speak with a U.S. official, is blatantly unfair given the nature of the information involved and the characteristics of Haitian culture.59 A State Department official interviewed felt that "Haitians are very open people." On the contrary, the confidential Haitian source interviewed said, "You really have to dig to get information from a Haitian. The burden of proof is heavily on the applicant." If an applicant needs help in filling out the application, an IOM staff person may assist. However, the NCHR observed this taking place in the waiting area, with no privacy, and conversations can be easily overheard. Some applicants pay someone to fill out the form for them. In those cases they may not even know what has ended up on the application. Thus, some asylum seekers may be unable or unwilling to articulate their case adequately before being vetted into a B category that will mean an impossibly long wait or the C category, which is tantamount to being ineligible to continue the process.
The B category itself is used as a catch-all between cases which are urgent or high-profile and cases where no claim to asylum is apparent from the questionnaire. Logically, the well-founded fear standard also falls between those two extremes, since the A category requirements are much higher than
55 Interview, Port-au-Prince, June 15, 1993. The official spoke on the condition that he not be identified.
56 Confidential Haitian source, interview in Port-au-Prince, June 17, 1993 (See note 49).
57 Ibid.
58 JVAs are concerned that in the regional centers this problem may be even more acute because of the small-town dynamics. For example, USCC wanted to use expatriate staff but found the cost prohibitive.
59 For example, Burmese refugee applications are vetted only after an interview carried out by a JVA.
the asylum standard.60 For example, according to the Refugee Coordinator, "Most A cases are past persecution."61 While it is reasonable to assign priority to urgent or high-profile cases for expedited treatment, a large number of others with potentially solid, albeit less dramatic, claims end up being deferred for an inordinately long time. Assuming that these applicants can wait for six months before they even see a U.S. official makes a mockery of their situation. The fact that delays are common to the program in other countries, far from a justification, is further proof of ICP's inadequacy in the context of Haiti.62 The following case is illustrative:
Rodrigue Normil is a thirty-three-year-old artist and activist who was arrested on January 20, 1992 in Grande Goave, his hometown. He believes that the reason for arrest was his brother's involvement in an organization accused by the army of involvement in the September 30, 1991 burning of an army post there. His brother was in hiding, and Normil was arrested in his place. He was severely beaten during his time in prison. He was released after twenty-two days without any legal process having taken place. He went into hiding, eventually moving to Port-au-Prince. On May 6, 1993 he approached IOM to apply for political asylum. He was given an initial interview date of December 3, 1993, a typical time lapse for a B case. On June 4, 1993 he decided to try to return to Grand Goave. He arrived there and was on his way to his house when three men in civilian dress stopped him and told him to hand over his weapon. He said he didn't carry a weapon. Two of the men were armed. They took the letter from IOM and said they were going to send it to Port-au-Prince police chief Michel Francois so that he would know that Roland was trying to leave the country after having burned down the military post. He was forced into their pickup, blindfolded and beaten, then taken to a cell where he spent three days. On the third day he was taken somewhere else, where he was held for eight days, constantly blindfolded. He was given a piece of bread with sugar on it once a day. Once he was brought a drink which turned out to be urine. After eight days he was taken out and abandoned, still blindfolded. He still has health problems including pain in his ears and the chest where he was beaten many times.63
It must also be noted that both IOM and INS staff processing a particular applicant know the vetting category from the start, as it is prominently featured in the file. Furthermore, in C cases the State Department is, in practice, making a final decision, based on a written questionnaire. Consequently, a considerable amount of screening is taking place before a face-to-face interview of any sort and before an INS official steps into the picture.
E. Inconsistency in Adjudication
The two key elements of asylum adjudication are the correct application of the standard and a credibility determination. The creation in 1990 of a special INS asylum corps responsible for adjudication is tribute to the difficulty involved in the fair and equitable processing of claims. The difficult question in the Haitian context becomes, What is the sieve through which you sift thousands of people with potentially worthy claims?
60 This is also the case with the lower, credible-fear standard which was used in Guantanamo to "screen in" Haitian refugees to the U.S. to pursue their asylum claims.
61 Interview with Luis Moreno.
62 In general, see Inzunza, "Refugee Act of 1980."
63 Interview with NCHR, June 22, 1993, summarized from interview in French.
Applying the standard for asylum and determining credibility. The human rights director for the UN/OAS Civil Mission said, "Obviously there is a huge number of people in fear of persecution; people are living in hiding at different levels."64 An INS quality assurance official interviewed said, "Everyone has a well-founded fear, maybe not on account of [the reasons stipulated in the asylum regulations]. It is very troublesome to get at people who fit; there are many people at risk. If a person is just scared but nothing has happened to them, that's the first cut. We look for persistence in the persecution, someone who has had a problem over time." The confidential Haitian source interviewed said, "There is a category of people who use ICP as a way out, but I think real cases are being bypassed. If you don't have proof, you most likely will be denied. It sometimes seems set up to make even good cases have a hard time getting asylum."
In Haiti, this challenge is magnified considerably by the following factors:
1. Interviewing officers are not drawn from the specially trained asylum corps and often have no prior experience with interviewing techniques, asylum law and case adjudication.
2. Interviewing officers are on sixty-day rotations, during which time they are under strong pressure to complete at least eight cases per day including interviews with interpreters and case writeups, among other tasks.
3. The above serve to limit officers' ability to familiarize themselves with country conditions and Haitian culture and to research individual cases.
In a study of adjudication of asylum claims in the U.S., the Harvard Law School National Asylum Project found a high degree of variance from officer to officer. Furthermore, in half of the asylum decisions reviewed, errors in law or analysis were detected. This is among a well-trained asylum corps whose sole task is asylum adjudication. In Haiti, where no asylum corps officers are directly involved in adjudication, inconsistency is a logical result.65
An important concern voiced by several sources close to the program was the difficulty of making a credibility determination under the current circumstances. One INS official estimated that a negative credibility assessment could account for perhaps up to thirty to thirty-five percent of denials.66 A government official close to the program said, "Credibility is so much harder than principles. My biggest concern is that someone will tell a true story and will be found to lack credibility. How much you know the system in Haiti is key to a credibility determination."67 The confidential Haitian source interviewed, as well as others close to the program, generally felt that it was difficult for INS officers to acquire the local expertise necessary to assess credibility and apply the standards fairly.
64 Interview with Ian Martin, Port-au-Prince, June 15, 1993.
65 Telephone interview with Sarah Ignatius, Study Coordinator, National Asylum Project, September 16, 1993. According to Ignatius, asylum corps officers in the U.S. have a goal of adjudicating twelve cases per week, spending an average of three hours on each case. In Haiti, in addition to eight cases daily, INS officers must handle motions to reconsider, unscheduled cases such as "walk-ins," as well as other administrative tasks.
66 The authors were unable to obtain data on specifics of the Haitian caseload from either INS or the State Department.
67 Interview with a U.S. government official on condition of confidentiality, Washington D.C., June 8, 1993.
September 1993
22
AW/NCHR/JRS
Quality assurance mechanisms. Quality assurance officers (a legal advisor from the General Counsel's office and an asylum corps officer) are on only thirty-day tours of duty, and their role is primarily limited to case review. They do not play a role in training, nor do they routinely participate in interviews. In addition, quality assurance personnel report that they have not been well received by the core INS staff in Haiti.68
In an internal INS memorandum to the District Director in Mexico City dated April 20, 1993, obtained by AW and NCHR, the Officer in Charge in Port-au-Prince requested that the legal advisor and asylum officer of the quality assurance team be removed. "The presence of a 'legal advisor' here reporting to the General Counsel independent of operations reporting undermines my authority and disrupts the traditional chain of command." He added, "We do not at this time need the quality control officer....since my Assistant OIC is now here and performing quality control....[R]eplacing [the previous officer] was entirely unnecessary and never discussed with either you or me."
The following excerpt from the memorandum raises serious concerns regarding the emphasis on production over quality and the attitude of INS officers toward Haitian applicants:
The on-site presence of a legal advisor places a hardship upon the interviewing officers in that they see the legal advisor as reviewing their work, looking for completeness, thoroughness, and in-depth questioning, while the Officer-in-Charge is pressing for production. Traditionally, refugee processing teams work hard all day and let off steam after work by gathering for a beer and laughing and joking about cases interviewed during the day. I might add that this is also good training. The seriousness of the asylum training, coupled with the watchdog style of certain Headquarters personnel, coupled with the on-site (even after hours) presence of a General Counsel representative has combined to hold such activity to a minimum.
In the same memo, the Officer in Charge emphasized the importance of the production requirement in refugee processing and indicated that the asylum training course required for all interviewing officers was good, but "focused too heavily on asylum and for the most part ignored refugee processing." The officers "have the perception that (1) the cases are so difficult and the quality requirements so strong that more than five cases per day per officer is impossible; (2) that every question, answer, hesitation, and body gesture must be thoroughly documented in a written decision; and (3) that failure to sufficiently document a written decision in accordance with the quality requirements would subject the interviewing officer to dire consequences of one kind or another."
More recently, T.J. Mills, a political asylum officer assigned to the quality assurance team in August, was suspended from the program after less than a week. He told AW and NCHR in Haiti that the reason for his suspension was that he questioned how case decisions were being made. Mills was very concerned about the basis upon which credibility determinations were being made and believes that people truly fearing persecution were being denied. In a telling case he reviewed, a negative credibility determination was based on an applicant's use of the term Ton Ton Macoutes, since this paramilitary structure had been previously abolished. Mills said that he reviewed 120 cases during his short stay in Haiti, of which only two had been recommended for approval. He discussed several cases with an INS supervisory officer who, in his view, was uniformly hostile to his questions regarding some decisions. According to the INS Refugee Asylum and Parole Division, Mills's tour of duty was curtailed because it
68 Several U.S. officials privately confirmed the existence of strong tensions between quality assurance personnel and the INS adjudicating staff. The continuation of quality assurance is currently under evaluation by the INS.
was discovered he lacked prior experience in case review at Guantanamo. The INS is investigating the matter.
There is a prevalent sense that the standard actually being applied is closer to the A category vetting requirements than the "well-founded fear" standard. Human rights groups reported that most applications eventually approved are either high-profile cases or victims of past persecution that is documented and often has been publicized. A staff person of the Centre Oecuménique des Droits Humains, a Haitian human rights group that provides assistance to victims of repression, said that sometimes people are persecuted but can't prove it. "This person would be arrested again or go into hiding. We can't help them because ICP won't accept them anyway."69 A spokesperson of the Plateforme Haitienne des Droits Humains, a national coalition of nine human rights groups, expressed similar concerns. "Many people with real problems of persecution have been refused. We wonder if the real objective is to accept refugees. It seems they are accepting only those most close to the Aristide government or most vocal in denouncing the violence."70
Motions to reconsider do not constitute adequate administrative review of claims. They are lengthy letters written by the applicant, outlining the entire case. Letters of denial of asylum include notice that the applicant may file such a letter. However, according to the text, "The request to have a case reconsidered or a file reopened must be written (doit être écrit) in English or accompanied by an English translation." This is a patently unreasonable expectation in Haiti. The NCHR has received numerous requests to help translate letters, most of which they cannot accept.
The following two cases provide a stark illustration of the concerns outlined herein. They involve local popular organization leaders, both musicians, who fought to get political asylum. The first was successful after his case was widely publicized, and is now in the U.S. The other was not so lucky, and was killed weeks after his latest attempt to receive political asylum through ICP.
Ferleau Nordé, a twenty-seven-year-old musician and activist from the southern town of Dame-Marie, applied for political asylum in November when he fled his home after being arrested and tortured. His case was still pending in February 1993 when a story about him appeared in The New York Times. A short time later, his case was denied. It took a second article in The New York Times in March, as well as the intervention of organizations including the NCHR, before his motion to reconsider was finally granted.71
Andrei Fortune was a twenty-nine-year-old local popular movement leader in Lascahobas. Like other activists, he had a history of harassment and problems, particularly because he was very outspoken at a time when freedom of expression is routinely punished. According to testimony he gave to the UN/OAS Mission prior to his death, he was arrested in May 1992 in Port-au-Prince with a companion, Roland David. Both were beaten, and the latter later died of his injuries. Since then, Fortune had lived in different levels of hiding. He applied for political asylum in Port-au-Prince in June 1992, one month after his detention. He was denied in July. In August he filed a motion to reconsider, which was also
69 Interview in Port-au-Prince, June 16, 1993.
70 Interview, Port-au-Prince, June 17, 1993. The Human Rights Platform researches and documents human rights abuses and disseminates reports nationally and internationally.
71 Howard French, "In Hiding in Haiti, Dissident Despairs of U.S. Help," The New York Times, February 1, 1993 and French, "Haitian Dissident Loses Plea for U.S. Refugee Visa," The New York Times, March 4, 1993.
denied. In June 1993, three local activists were briefly arrested following an incident where the bridge to Lascahobas had been closed off and tires burned by demonstrators. Others, including Fortune, went into hiding. Fortune's mother feared he was on an army list to be arrested, and contacted the UN/OAS Mission, whose staff interviewed him at that time. In July, Fortune re-applied for political asylum. Soon after, his case was denied for the third time. The refugee coordinator for ICP told AW and NCHR that Fortune's asylum claims had been denied based on a negative credibility determination due to false statements he had reportedly made in his applications. Weeks later, on August 16, 1993, several Haitian soldiers went to his home and one of them shot Fortune in the back, killing him. Since then, his colleagues have gone into deeper hiding, and several have fled the area. The UN/OAS Mission considers the case a clear example of a politically motivated killing, and issued a press communique condemning this violation of the right to life, demanding a thorough investigation by the authorities.72
The difficulty in determining how cases are adjudicated and decisions are made is due in part to an unwillingness to open the process to evaluation. For example, neither the INS nor the State Department was able to make available specific data on applicants which would be helpful in evaluating the process. Information requested included demographic and geographical information on applicants as well as detailed monthly case statistics for each vetting category, reasons for denial, etc.73
Experienced non-governmental organizations find the program in Haiti less accessible than similar programs they are familiar with elsewhere. For example, they say that access to denied cases is important in order effectively to assist Haitians in applying through the program. According to several NGOs involved in refugee processing elsewhere, such access is (informally) standard procedure in Southeast Asia. An official at World Relief reported that they had requested, and were denied, permission to observe INS interviews, a privilege they had exercised while processing refugees from the former Soviet Union in Rome.74 While such access seems to depend on the discretion of the INS Officer in Charge, the NGO view is that, given the difficulty and sensitivity of refugee processing in Haiti, they should have at least as much access as they enjoy elsewhere. So far this has not been the case.
Out-processing creates delays and risks. Out-processing of approved applicants prior to departure can involve delays as long as several months. According to the IOM director in Port-au-Prince, exceptionally urgent cases take one week to adjudicate and one week for out-processing. The majority of approved cases, however, are delayed four or more weeks, and AW and NCHR know of cases that have been delayed for months. The departure of "Claude," whose case is described above, had been delayed nearly a month when he was interviewed in Les Cayes in June, even though in the interim he had been arrested, threatened with death, and released. According to World Relief, he was still waiting for his passport.
The fact that approved applicants must be fingerprinted and obtain a passport from the de facto authorities in order to leave the country is of great concern. At the very least, obtaining a passport can delay out-processing considerably. It is tantamount to requiring permission from your persecutor in order to flee the country. Two sources familiar with the program reported that since January, when a Haitian army deserter who had been approved for asylum was arrested at the airport by the de facto authorities,
72 UN/OAS Civilian Mission, Communique de Presse, Ref:/CP/93/30.
73 It is not even clear exactly what information on applicants is being kept. However, it seems logical that a detailed database would serve as an excellent source of information on patterns of repression in Haiti.
74 This is ironic given that the filling-out of the initial questionnaire, which includes sensitive questions, is done in a crowded room.
the U.S. Embassy has been "clearing" at least some cases with the authorities prior to departure. AW and NCHR were unable to get official confirmation of this. But the January incident raises important concerns related to ICP in the Haitian context.
According to testimony given to NCHR, Coracelin Williams deserted the Haitian armed forces and went into hiding in December, 1991 after helping two people escape instead of arresting them as ordered. He fled the country on May 24, 1992 and was returned by the United States. The second time, he made it to Cuba but voluntarily repatriated in January, 1993. He then applied for refugee status through ICP and was approved. On the day of his departure, he was arrested at the airport and held for three days. He was released after the U.S. government intervened on his behalf, and is currently residing in the U.S.
In a March 15, 1993 press release regarding the case, the Haitian Armed Forces claimed that Williams had been court-martialed in absentia for desertion.
No coordination has been made with the Haitian authorities, who were totally ignored. After the unfortunate and lamentable incident, the Immigration and Naturalization Service representative in Haiti, Mr. Sam Martin, visited General Headquarters, at which time he was informed of the potential implications....The Armed Forces of Haiti reaffirm to the American administration their will to cooperate within the parameters set down by our constitution, the laws of Haiti. The Armed Forces of Haiti hope representatives of the INS in Haiti will be more vigilant in their handling certain files in the future.75
F. Representative Cases
During the course of the present research, AW and NCHR interviewed a number of people whose cases indicate chronic deficiencies in asylum claims adjudication as described herein, including an improper emphasis on past persecution. In some cases, an initial denial was overturned later, when the applicants' worst (well-founded) fears became a reality.
In one case assisted by the NCHR, a denied applicant filed a motion to reconsider in August. The applicant was a member of a popular organization and had suffered a series of arrests and beatings since the September 1991 coup. The following is a summary of information submitted in the applicant's motion:
In October 1991 his house was shot at and his father and brother were savagely beaten and imprisoned by the army. The next day the applicant was arrested with two friends. He was badly beaten and imprisoned for nine days. He has lived in hiding in the mountains since January 1992. In April 1992 he was arrested again with other activists and forced to paint over slogans and graffiti throughout the town. He tried to leave Haiti in a small fishing boat in May, and was forced to return because of rough seas. Since he could not safely go home, he went to Port-au-Prince. He applied for asylum in June 1992. Less than two weeks later, one of his brothers was arrested and imprisoned for one day. In August, the applicant's request for asylum was denied. He tried to return home but left soon after when he found out the army was looking for him. He was arrested in September in a small town where he was staying. He was again brutally beaten, tied, threatened with death. He was held for eighteen days and upon his release immediately left for Port-au-Prince. In October he wrote a letter of reconsideration to the ICP program. In April 1993 he received a refusal letter.
In addition to the four-page letter in his second, August motion to reconsider, the applicant
75 Press Release (in English), Grand Quartier General, Forces Armees d'Haiti, Port-au-Prince, Haiti, March 15, 1993.
submitted a newspaper article and a letter from the Ministry of Justice attesting to one of his arrests. This motion is still pending.
"Pierre" is a twenty-three-year-old student from Port-au-Prince. He is a member of a student organization known as the the Zafe Elev Lekol (ZEL). He was forced to leave his family home, stop attending school, live in hiding in another town since December 1992. On November 27, 1992, at 2:00 a.m., agents of the Service de Investigation et Antigang (known as Antigang ~ a division of the Haitian militarized police) came to his home. He was there with his friend, "Rene\" the latter's cousin, and other family members. Twenty men, some in uniform and others in civilian dress, surrounded the house and knocked on the door. They called Pierre by name, and he responded that he didn't have any reason to talk to the police. He finally went outside with the two friends, at which time the police started to beat him. They asked about Rene\ not knowing that he was one of the two there with him. They asked him about other ZEL members while beating him. They searched every room in the house, finding photographs of Aristide and spray paint for graffiti. They tied his hands behind his back and did the same with his two friends and took them to the Antigang headquarters. During his incarceration, Pierre was beaten, taken to a known dumping ground to scare him, threatened with death and interrogated about ZEL activities organizing student demonstrations and so forth. Both Pierre and Rene' spent a total of twenty-five days in prison, including seventeen in the Antigang headquarters and eight days in the National Penitentiary. They were released December 22 based on a December 18 court order. The charges, according to Pierre, were disturbing the peace, being Lavalas fanatics and criminal association. Both young men applied for asylum.76 Pierre applied on March 2, 1993 and received a letter of denial on March 25. 
Point 10 on his denial letter was checked off, stipulating that "your testimony concerning the facts, actions and circumstances is inconsistent on important points and is deemed to be inadmissible." René's application was approved. Pierre was amazed because he said they had mistreated him even more than his friend. He wondered if it could have been because of the interpreter. In April, he wrote a letter asking for his case to be reconsidered. As of late June, no response had been received.
Hilton Etienne (his real name) is a thirty-eight-year-old man from Hinche, the capital of the Centre Department. He is a leader of the Ti Legliz or Christian base community, and a member of the Catholic Church's Justice and Peace Commission, a neighborhood committee and a literacy program. He first fled his home on October 6, 1991 when the military came to his house. They were looking for local organizers of a September 30 street demonstration. One entered and warned him that he was going to be arrested. He fled on foot and by bus to Port-au-Prince. In the ensuing months he tried to return to Hinche on two occasions. Both times he was forced to return to Port-au-Prince by continued army harassment. In April 1992, soldiers came to his house, forced the door and pointed a gun at his wife's head and stomach, asking where her husband was. They searched the house, stealing some money and a VCR. They returned later and arrested her. A neighbor was sent to tell him, and he left immediately. His wife spent one day in prison, at one point fainting from the stress. In early May 1992, he went to the U.S. Consulate to apply for asylum after hearing about ICP on the radio. He had several interviews during June and was denied asylum in early July. Between October and December he tried to go back to Hinche twice. Both times, the army came around his house looking for him, the second time searching his house and removing literacy materials. In February 1993, his house was searched by the military again. In March, he asked his wife and mother to talk to the UN/OAS
76 "Pierre" showed the authors a certificate of his ZEL membership and two newspaper articles attesting to the arrest and imprisonment of both young men.
Mission about his case. He then returned to Hinche but never slept at home. On April 28 at 2:00 a.m. soldiers forced the door of his house and arrested him. They hit him in the face, and he lost two teeth. He was tied up with a rope and hit around the head and eyes. His ears were boxed. He was taken to the army headquarters where the Djak was performed on him, and he was beaten 200 times with a stick.77 En route to another location, he was beaten in the street, his left arm was broken and he was made to count as he was hit another hundred times. He briefly lost consciousness and was dragged by his captors before being locked up for one night. He was released on April 29, and went back to Port-au-Prince to a hospital. A priest from Hinche came and warned him to go to a private hospital because they were looking for him. His wife joined him and they lived in hiding in Port-au-Prince. (His case was publicized in several periodicals and on radio; consequently he cannot walk in the streets because he fears he would be recognized.) The NCHR assisted him in writing a letter of reconsideration which was approved in June. Two weeks later he was still waiting for the results of his medical examination and for his passport. His wife must stay behind to care for grandparents and children but she wants to move to Port-au-Prince.78
Clor Josephat (his real name), thirty-seven, is from Perodin, Petite Rivière de l'Artibonite. He has a wife and four children. He is a farmer by profession and a member of the Rassemblement Paysan Perodin (RPP), a local farmer organization. He is currently living in hiding in Port-au-Prince. On November 12, 1991 at about 3:00 p.m. soldiers came to his house to arrest him. They burned his house down. They told him it was because he was a member of the RPP and Lavalas. They asked him how many people he had burned, if [President] Aristide was going to come back and about the activities of Lavalas. Fifteen other members of his organization were arrested at the same time. He was not beaten in prison because his family paid money to the army so he wouldn't be mistreated. They paid 360 Haitian gourdes (about US$30.00) so he wouldn't be beaten and another 360 Haitian gourdes for his release. (Those that didn't have money to give were beaten and didn't get released as quickly.) Upon his release he went to live in his mother's house. On January 16, 1992, he was on his farm when his wife warned him that the army had gone to his mother's house looking for him. He left the area immediately. He found out later that the local section chief (a local rural military authority) had asked the military to arrest all members of his organization. On March 14 he went to Port-au-Prince. He slept in front of St. Joseph's Church and finally worked out room and board with someone in exchange for work. In October he tried to return to his home and was arrested en route. He was beaten, and his ears were boxed. The soldiers walked on him, and he fainted. (He has a medical certificate dated October 27, 1992 which certifies that he was the victim of police brutality.) He went back to Port-au-Prince at the end of November, to a local NGO, the Centre Oecuménique des Droits Humains, and they provided medical referral and financial assistance.
He went to NCHR for assistance and then applied for asylum through ICP. After five visits over a three-day period, he received a denial letter. He returned to the NCHR office with the letter. The staff assisted him in writing a motion to reconsider. As of July he was still waiting for a response.
According to Josephat, eleven other members of the RPP have also asked for asylum. One was approved and has left Haiti. Other applications are still pending. Some who were arrested after him and
77 The "Djak" is a common form of torture involving tying the victim's arms behind the knees and beating the victim repeatedly. It is also common to make the victim count the blows; a miscount results in more beating.
78 See also, Harold Maass, "Some repatriated Haitian refugees subjected to arbitrary arrest, torture," The Miami Herald, June 18, 1993.
filed later were given January 1994 appointments. Americas Watch and NCHR know of at least one other RPP member who has received asylum after having similar problems and is due to leave Haiti soon.
Fritzion Orius (his real name) is a thirty-year-old journalist from Petite Rivière de l'Artibonite. He is married and has one child. He was a radio correspondent with Radio Haiti-Inter in his area and in Port-au-Prince. He was also a member of the local elections bureau during the September 1991 elections that brought President Aristide to power. After the coup, journalists like himself were seen as "outlaws." When a group of armed thugs went to his house and threatened him, he left town. Since then he has been moving around, living in hiding. While staying with a friend in a town south of the capital he applied for asylum at the U.S. Consulate in May 1992. He was given an appointment for the first week of June. He had several interviews, including form-filling sessions, during eight hours spent there. In a week he had heard nothing and called, only to be told to go pick up his denial letter dated June 15, 1992. He continued to live in hiding. He tried to go back to his home in February but didn't sleep at his own house. A group of police and attaches, two of whom were in uniform, went to his house looking for him. They found and savagely beat his twenty-year-old brother with machetes, sticks and clubs. (His brother had worked with the Information Ministry under the Aristide government.) He went back into hiding, living from town to town, unable to work. During this time, several of his fellow journalists had also had problems. Three had been arrested, severely beaten and spent a month in prison.79 Friends warned him to leave, believing him to be in danger. He made several more attempts to go home and to visit his family, and each time he was threatened and harassed. After an incident where attaches attempted to detain him during one such visit, he contacted the Committee to Protect Journalists and decided to reapply for asylum. He filed a letter of reconsideration on Thursday, June 17, 1993 with the help of the NCHR.
He asked Americas Watch and NCHR to intervene in his case with the ICP program, and both organizations expressed their concern about his case directly to the Refugee Coordinator. Orius's motion to reconsider was refused in August.
On Monday, June 28, 1993 Mr. Vesnel Jean-Francois (his real name), a literacy worker and coordinator of a coalition of community organizations of Cité Soleil, was arrested and tortured by the Haitian military after they broke up a demonstration of about one hundred supporters of President Aristide. He was hospitalized in military custody, and released on July 1. Jean-Francois had applied for political asylum through ICP in Port-au-Prince in October 1992. His claim was denied. In March, he sought help from the NCHR to have his claim reconsidered and was again denied.80
In the case of motions to reconsider, past persecution also seems key, as opposed to a "well-founded fear." The following cases are examples:
"Louis," a thirty-four-year-old member of a community association in a Port-au-Prince neighborhood, applied for asylum and was denied in June 1992. In October he filed a motion to reconsider which was rejected on January 12, 1993. On November 25, 1992, during the time his motion was pending, armed men in civilian dress went to his house. He was not home, and they searched the house saying they were looking for weapons. Then on January 31, 1993, little more
79 Journalists have been particularly singled out for repression since the coup. It is one of the professions considered by the State Department to be "at risk" for purposes of placing an applicant in the "A" vetting category.
80 See Pam Constable, "...and the beatings continue," The Boston Globe, July 6, 1993.
than two weeks after his motion to reconsider was denied, soldiers from the Cité Soleil army post arrested him for being "pro-Aristide." He spent six days in prison. He filed yet another motion to reconsider on February 7, 1993 and is still waiting for a response.
The Miami Herald reported as follows:
Seraphin and his brother Caceus tried to flee by boat seven months after the coup, but were sent back. They then applied for refugee status through the U.S. Consulate in Port-au-Prince, as do about 20% of the refugees returned home. They were rejected. Caceus was badly beaten during his most recent arrest, his third, and grimaced in pain as he lay in his bed after his release. Shortly before his arrest in May, he went to the U.S. refugee application office in Port-au-Prince to ask that his case be reconsidered. Overloaded with applications, employees gave him an appointment. In October.81
G. Some Asylum Seekers Will Not Risk Applying
By definition, no matter how well structured and managed, ICP cannot meet the needs of a significant group of asylum seekers who distrust the program or believe that they would put themselves or their loved ones in danger by approaching it in the current political climate. These may not be the highest-profile people, who by definition live more in the public eye. According to the international refugee expert interviewed, "The program lacks credibility, it is seen with suspicion, it is linked with the U.S. government position in general." Staff of local human rights groups claim the program is viewed with great skepticism by their clientele and the relentlessly targeted popular movement organizations. "People are discouraged and reluctant," according to staff of the Catholic Church's Justice and Peace Commission. "They know they are taking a chance. They see it as a waste of time."82
An INS asylum officer interviewed in Port-au-Prince said, "I wonder whether we're seeing the people with the best claims. We don't see [those in] deepest hiding. They are so afraid they won't come out. We don't know how bad things are." However, the State Department official interviewed told AW and NCHR that he and his colleagues "believe ICP is safer for real refugees....We think we're getting the most vulnerable."83 The confidential Haitian source interviewed personally knew people in hiding who are "afraid to go there, afraid that what they say will haunt them."
The IOM and INS are willing to do off-site interviews in Port-au-Prince for those afraid to approach the program. But according to local human rights groups, this can draw dangerous attention to the applicant as well. The Justice and Peace Commission, an NGO in daily contact with people in hiding and victims of persecution, reported that in May, an IOM official went to Cité Soleil with a Haitian guide to conduct an off-site interview that their office had helped arrange. That night, Zenglendos (armed thugs) arrived at the house where the interview had taken place, frightening the entire household into hiding. The Justice and Peace staff knew of other individuals in hiding who would not apply.
81 Maass, "Some repatriated Haitian refugees subjected to arbitrary arrest, torture."
82 Interview, Port-au-Prince, June 17, 1993.
83 According to Inzunza, "[U]nfortunately, in most cases, those most in need of this legal remedy, those most vulnerable to abuses and with least access to any viable alternatives, are least likely to be able to take advantage of it. Recognizing the need for this kind of processing, we must also realize its inherent limitations." ("Refugee Act of 1980," p. 421.)
September 1993
30
AW/NCHR/JRS
Similarly, staff of the Centre Oecuménique des Droits Humains say that in their experience, there are many people who will not contact the program. They know of cases of people who "self-vet," even though they have been persecuted, because they know they don't have enough proof. One staff person told the story of two friends in hiding. "They are afraid to go to IOM, they are afraid of the process," he said. "They are two young people from Miragoane who think they must leave. They are members of the Organization to Defend the Interests of Nippes, a local popular organization. They have colleagues who have been arrested and others who have been killed. People in the area see them as 'Aristide fanatics.' They have been living in hiding since attachés broke up a meeting they were attending in a school and threatened them. They fear they will be arrested if caught."
V. INTERDICTION, FORCED RETURN AND IN-COUNTRY PROCESSING
New procedures have been implemented to assure that asylum seekers forcibly returned by U.S. Coast Guard cutters are smoothly incorporated into the ICP program. U.S. Embassy personnel continue to meet the cutters at the dock to monitor the return. Currently, an INS interpreter, explanatory audio cassettes and preliminary questionnaires are available on board the cutter. The questionnaires are vetted by State Department personnel before disembarkation, and asylum seekers are given an interview date.
On July 17, AW and NCHR observed the forced return of eighty-seven refugees by the U.S.C.G.C. Tahoma. It was anything but smooth. Present at the dock were the Haitian Red Cross, the U.S. Embassy, the UN/OAS Mission and the press. Haitian police and immigration officials swarmed the area.84 U.S. Embassy officials initiated on-board vetting of the questionnaires. One official later told AW and NCHR that the cases were "mostly Bs, no As."
After disembarking, the returnees, including eleven children, were hustled into line by police officials. The Embassy had arranged for a woman in labor to be taken directly to a waiting Haitian Red Cross ambulance instead of having to go through the immigration process. She was questioned by police and immigration officials while waiting in the ambulance. Everyone else was taken to an immigration/police post at the pier for processing. Several of the returnees had their heads covered and hid their faces as journalists snapped pictures. A journalist from the government television station and other national and foreign press interviewed a number of the returnees.
Immigration processing included questioning, fingerprinting the adult males and a meticulous search of all of their possessions. The Haitian officials were brusque and insulting. One official held up a bag of diapers and box of feminine napkins and displayed them laughingly to the spectators before tossing them back in the plastic bag. The whole process was, as one foreign official commented, "intimidation par excellence." The Red Cross handed out yellow cards for food aid at their destinations and provided cash for bus fare.
The following case of a young man forcibly returned from the interdicted boat is an alarming example of individuals who are potentially at risk being forcibly returned to Haiti:
"Jacques," a first-year university student with a current identification card from the Faculty of Applied Sciences in Port-au-Prince, had been living in hiding for months when he decided to try
84 A police officer harassed a Haitian NCHR colleague on several occasions, threatening to have him removed. The officer said that they didn't recognize his organization and that such organizations were "crushing the country, always telling lies about the situation." Similar harassment obliged AW's Haitian interpreter to leave the dock area.
to leave Haiti. On December 6, 1992 he had been threatened by a gang of armed thugs who said they were after students from his and other universities. He left his house and moved to Leogane, south of Port-au-Prince, but was subjected to continued harassment so that by early June he was too afraid to return to school. He lived in hiding and through friends heard that a boat trip was being organized. Once on the U.S. cutter he filled out a questionnaire. He claims he was told only to complete the first page of biographical information. He did not tell anyone on board about his situation. He showed AW and NCHR a card he was given with a March 1, 1994 appointment date with the IOM.
By the end of the process, twenty-two men had been led away for police questioning. Of these, six were arrested and taken by a uniformed police officer and two individuals in civilian dress in a private pickup to the Immigration and Identification Service, housed in the same building as the infamous Antigang headquarters. Those arrested were Lionel Brice, Micot Brice, Jean Arnold Morice, Wisner Julme, Letoine Joseph and Roland Bernard. The U.S. Embassy was informed that they were under arrest as the alleged trip organizers. Three were released after several hours and the others the following day. None was charged.
On July 5, 1993, the U.S. Coast Guard cutter Baranof repatriated twenty-six people who were aboard a boat intercepted about thirty-five miles southwest of Griffe du Sud. They were afforded the same treatment described above. Eleven were taken for questioning, and five of these were arrested on charges of "illegal departure" and "organization of a clandestine voyage." They spent two nights imprisoned at the Antigang headquarters before being freed on July 7 by a public prosecutor. They later told the UN/OAS Mission observers that they had not been mistreated but had been interrogated about the trip organizer and the owner of the boat.
As this report was being completed, on September 11, 1993, a Haitian boat carrying forty-six people was interdicted. As the passengers were being transferred to the U.S.C.G.C. Mohawk, rough seas caused the Haitian boat to capsize, and nine Haitians drowned. The rest were summarily returned to Port-au-Prince on September 13.
Among the passengers, who departed from the northern city of Cap Haitien, were ten members of a local organization, Komite Katye Lavalas. On September 3, 1993, eighteen people, including these ten, had been arrested. "Sean", twenty-three, and two other victims, told NCHR that they were beaten and incarcerated before escaping at night through a prison window. Ten of those who escaped boarded a boat departing the next morning after the boat owner agreed to help them flee. Six of that group were among the drowned, including Sean's wife.
Prior to disembarking at the Port-au-Prince dock, the returnees filled out ICP preliminary questionnaires. Sean and another returnee were given appointments for the following morning. Six other passengers were arrested and briefly detained, including a fourteen-year-old girl and the sympathetic boat owner who had assisted the ten in their escape.
A 1980 Haitian decree regarding "irregular voyages" stipulates (Art. 3) that:
Any organizer of an irregular voyage destined for abroad, any attempt to make a person undertake a voyage abroad from the national territory without the corresponding legal procedures will be punished by a sentence of six months to three years as dictated by the competent correctional court. In case of a repeat offense, the guilty party will receive the maximum sentence
and will be fined 10,000 to 50,000 gourdes.85
On January 16, 1993, 107 Haitians who were preparing to leave by boat were arrested by soldiers on a beach near Gonaïves. The soldiers fired their guns into the air, tied the men together by their wrists and the women by their dresses. They were taken to the Toussaint L'Ouverture military base, where they were detained. According to the military commander interviewed by NCHR, the refugees were held for violating the 1980 decree. Officials in Gonaïves interpreted the decree as including all those who pay for passage on a boat. Approximately forty-five detainees, mostly women and children, were held for five days and released on January 21. The rest were released on January 22.
The existence of this law makes forcible return of Haitian boat people even more unconscionable under the present circumstances, since it is being enforced by the same regime responsible for innumerable abuses against citizens. Unauthorized departure is a recognized trademark of refugee flight worldwide. In Haiti it is also considered a crime. The U.S. policy of summary return of all Haitians blurs the fundamental distinction between a refugee and a criminal, returning Haitians to face detention and possible prosecution. Not only are Haitians denied the possibility of arriving at a safe port; they are returned to a country where the very fact of their exit puts them in danger. According to officials, the U.S. Embassy has taken great pains to follow up on these cases, even for months, until they are resolved. However, the Embassy has no control over who is detained and why, or how they are treated, physically and legally. Picking up Haitians packed into rickety boats may be termed "rescue at sea." Returning them to Haiti, under these circumstances, clearly cannot.
VI. CONCLUSIONS
The Clinton Administration has contributed to the achievement of an accord which may lead to a settlement of the Haitian political crisis. This is an example of what can be achieved when the U.S. government works cooperatively with the international community. It is a positive initiative that is marred by the blatant mishandling of the refugee crisis.
A fundamental question is whether Haitian refugees are a U.S. "problem" or a regional or international "problem". Logically, many Haitians choose the U.S. as their country of first asylum.86 While this does not, of course, oblige the U.S. to receive them as asylees, there is an international obligation to respect the principle of non-refoulement and provide at least temporary refuge.87 By taking the lead in curtailing the legal options of Haitian asylum seekers, by interdicting and forcibly returning them, the U.S. government has not only made the refugee crisis its problem but has, at the same time, sabotaged the possibility for international participation and support. By continuing the blockade,
85 Decree dated November 17, 1980. By contrast, Article 41-1 of the 1987 Haitian Constitution states that, "No Haitian needs a visa to leave the country or return." The Constitution also states (Art. 19) that, "The State has the binding obligation to guarantee all citizens the right to life, to health and respect for the human person without distinction, in conformity with the Universal Declaration of Human Rights." Translations from the French by Americas Watch. Article 14 of the Universal Declaration expressly provides for the right to leave a country and seek asylum.
86 Thousands of Haitians have also sought refuge in the Dominican Republic, with mixed results.
87 See generally UNHCR, Conclusions on the International Protection of Refugees Adopted by the Executive Committee of the UNHCR Programme (United Nations: New York), regarding large-scale influx and temporary refuge. (For example, 30th Session, 1979; 31st Session, 1980.)
President Clinton effectively eliminated the possibility of a regional approach to the crisis, isolating the U.S. from the very entities, nationally and internationally, that could have contributed to a reasonable regional response. The ICP program is both a product and a victim of this isolation.
Using ICP as the justification for refoulement and the only alternative for asylum seekers is ludicrous and severely abridges the rights of Haitians. Policies should fit and uphold laws. By pretending that ICP is an appropriate response to the Haitian crisis, and that the principle of non-refoulement is not applicable, the U.S. has succeeded in turning the intent of international and domestic refugee law upside down to make it fit a discriminatory policy. The results are easily observable in the practice. Contrary to the assertions of administration officials, Haitians worthy of asylum are indeed "slipping through the cracks."
While the ICP program has improved in recent months, it continues to suffer from the inconsistencies stemming from the flawed rationale behind it and the problems inherent in carrying out such a program in the political context of Haiti. Asylum seekers, above all, need protection and the ability to state their claims in a climate of trust, safety and fairness. As the present research indicates, many have not found these basic needs met by ICP. Furthermore, Haitians are punished for doing what anyone would instinctively do when faced with danger: flee.
Some Haitians who have availed themselves of ICP have been unfairly and indefinitely put on hold. Some have been rejected until their well-founded fears are borne out. Still others have suffered persecution at different stages of the process. Those who reasonably infer that ICP is not a safe option are left with no option. Furthermore, there are indications that the State Department is taking steps toward winding down the program in anticipation of a political settlement.
INS Commissioner-nominee Doris Meissner recently wrote, "Receiving countries must be attentive to pre-refugee, pre-migration circumstances in sending countries....Thus, migration prevention must become a legitimate objective of international diplomacy and national policy." However, "migration prevention" does not mean establishing blockades and beefing up border patrols to protect U.S. borders from a so-called onslaught of undesirable aliens. It does mean actively engaging in the establishment of lasting solutions to the political crises in countries like Haiti. Contributing to an environment in which citizens are able freely to exercise their rights, including the right to stay, should be a centerpiece of this solution. The U.N. High Commissioner for Refugees, Sadako Ogata, has repeatedly warned that while the right to stay is important, the right to flee must not be forgotten. "[P]revention is not... a substitute for asylum; the right to seek and enjoy asylum, therefore, must continue to be upheld."88
U.S. policy toward Haitians is a case study in the growing tendency worldwide to close the doors on refugees. The harsh consequences of restrictive ICP procedures for some Haitian asylum seekers suggest what we can expect if proposed U.S. legislation is passed to curtail the asylum process even further. U.S. policymakers have played on the public's fears of increased immigration and are taking the lead in a worldwide trend toward closing borders and denying asylum to bona fide refugees. Having done that, the U.S. forfeits its ability to encourage other countries to do what's right when faced with large numbers of people fleeing persecution.
The NGO community was prepared to work with the incoming Clinton Administration to establish a fair response to the Haitian refugee crisis. Proposals backed by AW and NCHR and many others were quite conservative and pragmatic. The Administration was asked to do the bare minimum required by
88 As cited in the World Refugee Survey, U.S. Committee for Refugees (Washington, D.C.: 1993), p. 7.
law and human decency to respond to Haitian asylum seekers: respect the principle of non-refoulement and give them a fair hearing. Instead, the Clinton Administration opted to continue violating the most basic rights of Haitians and to invest considerable resources and energy into further damaging both domestic and international refugee law, affecting asylum seekers the world over.
As stated in the introduction, progress in the political arena, while cause for hope, does not lessen the need for an adequate policy to address the needs of Haitian refugees, now and in the future. The precedents set by the management of the Haitian refugee crisis have ominous consequences for future refugees fleeing massive human rights violations, in Haiti or elsewhere. The blatant manipulation of U.S. and international law, by two administrations, to further this policy leaves open the question of how the U.S. will handle the next refugee emergency.
Haitians have a right to flee persecution and seek safe haven. The U.S. government has played a central role in the refugee crisis, going out of its way, on the high seas, to actively deny safe haven and has called it "rescue." It has further established an in-country processing program that cannot, in and of itself, serve as an adequate response to the needs of Haitian asylum seekers, and has called it "complete coverage."
VII. RECOMMENDATIONS
1. End refoulement policy: On January 14, President-elect Clinton announced the temporary continuation of forcibly returning Haitian refugees, saying, "The practice of returning those who flee Haiti by boat will continue, for the time being, after I become President." This "temporary" measure should be ended, Supreme Court decision notwithstanding, and new procedures should be established, in conjunction with the Aristide administration, for handling asylum seekers outside Haiti.
2. Help to develop alternatives: The U.S. should work closely with the UN, UNHCR, the NGO community and OAS member states to devise an acceptable and safe international response to Haitian asylum seekers in which the U.S. would necessarily play a leading role. In December 1992, a broad-based coalition of NGOs supported a series of measures for protection and processing that, consistent with international law and standards, could immediately be implemented. These included:
ending automatic repatriation;
expanding in-country processing;
increasing the number of refugee slots allocated for Haitians;
opening up a safe-haven enclave in the Caribbean basin;
settling pending litigation; and
providing temporary status to Haitians currently in the United States.
The UNHCR has advocated a broad strategy which includes facilitating a regional response consistent with traditional principles of burden sharing so that no one country shoulders the responsibility. The UNHCR has also promoted a comprehensive plan for rescue at sea, screening and non-refoulement, and full and fair procedures for eligibility determinations.
3. Conduct an independent review of the ICP program: Given the inconsistencies in asylum claims management and adjudication cited in this report, an independent review of the ICP program, including the roles of both the State Department and the INS, is called for.
The U.S. Congress should request an investigation by the General Accounting Office with the objectives of investigating the State Department's approach to and management of the program as well as detecting irregularities in adjudication to ascertain whether Haitian asylum seekers receive a fair and timely hearing through the program.
The Attorney General should appoint an impartial panel of experts to carry out a thorough review of case decisions for the Justice Department. The methodology employed, as well as all findings, should be transparent and open to public participation and scrutiny. Haitian asylum seekers who believe they have received an unfair decision, or their representatives, should be included in the review process.
4. Use ICP as part of broader plan: ICP should be continued as an important additional avenue of protection for Haitian asylum seekers. As part of a broader plan of action for Haitian refugees, ICP could be strengthened and supported by a more collegial relationship with various local and international NGOs and international institutions such as the UNHCR. Only in this context could specific recommendations regarding ICP operations in Haiti contribute to improving the program.
Acknowledgments
This report was written by Gretta Tovar Siebentritt of Americas Watch (AW) based on research by her and Anne Fuller and Pierre Esperance, both of the National Coalition for Haitian Refugees (NCHR) in Haiti. Mary Pack and Rob McChesney of Jesuit Refugee Service/USA (JRS) contributed to the research in Washington, D.C. The report was edited by Cynthia Brown in conjunction with the staffs of AW, NCHR and JRS.
The authors would like to thank the numerous institutions whose staff contributed information for this report, including the State Department and the Immigration and Naturalization Service (INS) in Washington, D.C., and the U.S. Embassy, the INS and the International Organization for Migration (IOM) in Port-au-Prince. We are also grateful to many individuals and institutions, both in Washington, D.C. and in Haiti, who generously shared their experiences. These include the United Nations High Commissioner for Refugees, United Nations/Organization of American States International Civilian Mission, U.S. Catholic Conference, World Relief, U.S. Committee for Refugees, Centre Oecuménique des Droits Humains, Plateforme Haïtienne des Droits Humains, the Centre de Recherche et de Formation Economique et Sociale pour le Développement (CRESFED), the Catholic Church Justice and Peace Commission, Franz Guillite and several individuals who spoke with us confidentially. We particularly thank the Haitian asylum seekers named and unnamed in this report, who agreed to speak with us in Port-au-Prince and Les Cayes. Finally, we thank Mews Joseph and C. for greatly facilitating our research in Haiti.
* *
Americas Watch was established in 1981 to monitor and promote the observance of internationally recognized human rights. Americas Watch is one of five regional divisions of Human Rights Watch. The Chair of Americas Watch is Peter D. Bell; Vice Chairs, Stephen L. Kass and Marina Pinto Kaufman; Executive Director, Juan E. Méndez.
Human Rights Watch is composed of five regional divisions (Africa Watch, Americas Watch, Asia Watch, Helsinki Watch and Middle East Watch) and the Fund for Free Expression. Its Chair is Robert L. Bernstein; Vice Chair, Adrian W. DeWind; Acting Executive Director, Kenneth Roth; Washington Director, Holly J. Burkhalter; California Director, Ellen Lutz; Press Director, Susan Osnos; Counsel, Jemera Rone.
National Coalition for Haitian Refugees, established in 1982, is composed of forty-seven legal, human rights, civil rights, church, labor and Haitian community organizations working together to seek justice for Haitian refugees in the United States and to monitor and promote human rights in Haiti. Its Executive Director is Jocelyn McCalla; Associate Director, Anne Fuller; Research Associate, Ellen Zeisler. In addition to periodic reports on human rights in Haiti, the NCHR publishes a monthly bulletin on human rights and refugee affairs. It is available upon request.
Jesuit Refugee Service/USA, located in Washington, D.C., is the central coordinating office in the United States of the international Jesuit Refugee Service. JRS was founded in Rome in 1981 under the auspices of the Society of Jesus. Regionally organized, JRS operates programs for refugees and internally displaced persons in over twenty-five countries in Asia, Africa, Central and North America, Europe and Australia. Major program foci include pastoral care, legal assistance, research, health education and accompaniment.
RECENT PUBLICATIONS FROM AMERICAS WATCH
Please fill out completely, including subtotal, postage, total enclosed, and your shipping address. Please make checks payable to Human Rights Watch.
Qty Country Title Price Total
_Argentina Police Violence in Argentina, 12/91, 46 pp., 051-0 $ 5.00_
_Bolivia Almost 9 Years: Still no Verdict in Trial on Responsibility, 12/92 3.00_
_Brazil Urban Police Violence in Brazil, 5/93 3.00_
_ Brazil: Prison Massacre in Sao Paulo, 10/92 3.00_
_Chile Chile: Struggle for Truth & Justice, 7/92 3.00_
_Colombia Political Murder and Reform in Colombia, 4/92, 128 pp., 064-2 10.00_
_Cuba 'Perfecting' the System of Control, 2/93 3.00_
_ Tightening the Grip: Human Rights Abuses in Cuba, 2/92 3.00_
_Dom. Rep. A Troubled Year: Haitians in the D.R., 10/92, 54 pp., 082-0 7.00_
_ Dominican Authorities Ban Creole Radio Program, 4/92 3.00_
_El Salvador Accountability & H.R.: Report of the U.N. Commission on Truth, 8/93 3.00_
_ Peace & H.R.: Successes & Shortcomings of ONUSAL, 9/92 3.00_
_ El Mozote Massacre: The Need to Remember, 3/92 3.00_
_ El Salvador's Decade of Terror (Yale Press), 11/91, 240 pp., 04939-0 25.00_
_Guatemala Clandestine Detention, 3/93 3.00_
_ Guatemala: Getting Away with Murder, 8/91, 136 pp., 024-3 10.00_
_Haiti Silencing a People: Destruction of Civil Society, 144 pp., 094-4 10.00_
_ Skewed U.S. Monitoring of Repatriated Refugees, 6/92 3.00_
_Honduras Torture and Murder by Government Forces Persist, 6/91 3.00_
_Jamaica Human Rights in Jamaica, 4/93 3.00_
_Mexico Unceasing Abuses, 9/91, 38 pp., 040-5 5.00_
_Nicaragua Fitful Peace: Human Rights under Chamorro, 7/91, 64 pp., 034-0 7.00_
_Panama Human Rights in Post-Invasion Panama, 4/91 3.00_
_Paraguay Encouraging Victory in Search for Truth & Justice, 10/92 3.00_
_Peru Human Rights One Year after Fujimori's Coup, 4/93, 56 pp., 098-7 7.00_
_ Untold Terror: Violence against Women, 12/92, 76 pp., 093-6 7.00_
_ Civil Society & Democracy under Fire, 8/92 3.00_
_ Peru under Fire (Yale Press), 6/92, 200 pp., 05237-5 23.50_
_Suriname Human Rights Conditions on Eve of Elections, 5/91 3.00_
_U.S. Frontier Injustice: H.R. Abuses Along U.S. Border w/Mexico, 5/93 3.00_
_ Freedom of Expression among Miami's Cubans, 8/92 3.00_
_ Brutality Unchecked: U.S.-Mexico Border Abuses, 6/92, 88 pp., 074-8 7.00_
_HRW Human Rights and U.N. Field Operations, 6/93, 184 pp., 103-7 15.00_
_ The Human Rights Watch Global Report on Prisons, 6/93, 344 pp., 101-0 20.00_
_ Human Rights Watch World Report 1993, 1/93, 424 pp., 05600-1 20.00_
_ Indivisible Human Rights, 9/92, 82 pp., 084-7 7.00_
_ An annual subscription to all the publications from all the divisions of Human Rights Watch is available (includes postage & handling). 375.00_
_AMW An annual subscription to just the Americas Watch newsletters is available (includes postage & handling). 40.00_
_ An annual subscription to all Americas Watch publications is also available (includes postage & handling). 85.00_
Subtotal $_
Total Enclosed $_
Ship to (name and address): please print
Shipping charges: for the U.S., on orders under $30.00 add 20%; $30.00 to $100.00 add 10%; over $100.00 add 5%. For other countries: airmail orders add 50%; surface mail add 30%.
Phone number
Address orders to Human Rights Watch, Publications Department, 485 Fifth Avenue, New York, NY 10017-6104
8/93 | http://ufdc.ufl.edu/AA00000874/00001 | CC-MAIN-2016-40 | refinedweb | 22,749 | 54.32 |
Test parallelizer
Project description
Parallelizes test executions.
It allows you to parallelize integration/acceptance test execution across different environments, so the tests take much less time to finish.
It is plugin-based in order to support different languages and platforms.
ParaTest can be run under any Continuous Integration Server, like Jenkins, TeamCity, Go-CD, Bamboo, etc.
Why Paratest?
Almost all test runners allow you to parallelize the test execution, so… why Paratest?
Well… In some cases test execution cannot be parallelized because of dependencies: database access, legacy code, file creation, etc. Then you need to create a full workspace whenever you want to test them.
This may be a hard task, and sadly Paratest cannot help there.
But with some scripts to clone an existing workspace, Paratest can divide the tests between any number of workspaces, creating them on demand and running the tests on them. Only your resources set the limit.
Another advantage of Paratest is the test order: Paratest remembers the time spent on each test and will reorder the tests to get the most out of your infrastructure.
And finally, Paratest can retry failed tests, in order to cope with unstable (flaky) tests.
Usage
First of all, you need two things:
- a source. This means a source with instructions to create a workspace.
- some scripts to set up/tear down the workspaces. These should translate the source into a workspace.
Then, Paratest will call the setup scripts in order to create the workspaces and will parallelize the test run between them.
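As an illustration (the function names and the way paths are passed in are assumptions; the real interface depends on how you wire Paratest into your project), a clone-based setup/teardown pair for a Python project could be as small as:

```python
import shutil

def setup_workspace(template, workspace):
    """Clone an existing workspace template into a fresh directory.

    One parallel worker then owns `workspace` exclusively (its own
    database file, fixtures, config...), so tests cannot collide.
    """
    shutil.copytree(template, workspace)

def teardown_workspace(workspace):
    """Remove a per-worker workspace once its share of tests is done."""
    shutil.rmtree(workspace)
```

The key design point is that each workspace is disposable: it is cheaper to copy a prepared template N times than to rebuild the environment from scratch for every worker.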
Current plugins
ParaTest is in an early development stage and it still has no plugins to work with. It is just a proof of concept.
Contribute
Plugins
Writing a plugin is quite easy. You can see paratest-dummy as an example. Just two steps are required:
Write the plugin methods
Currently, just one method is required:
def find(path, test_pattern, file_pattern, output_path):
It should return a dict, or a generator of tuples.
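As a sketch (assuming each tuple pairs a test identifier with the file that contains it; the exact contract is a plugin-level decision, not something mandated here), a find for a Python test suite could look like:

```python
import fnmatch
import os

def find(path, test_pattern, file_pattern, output_path):
    """Yield (test_id, file_path) tuples for every matching test file.

    This is only a sketch: it filters file names with `file_pattern`.
    A real plugin would also use `test_pattern` to select individual
    tests inside each file, and `output_path` for reports.
    """
    for root, _dirs, files in os.walk(path):
        for name in files:
            if fnmatch.fnmatch(name, file_pattern):
                yield name, os.path.join(root, name)
```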
Register the entrypoint
The second step is to create a pip package that registers the entry point find within the group paratest. This should be done in the setup.py file. Example:
from setuptools import setup, find_packages

setup(
    name='whatever',
    version='0.0.1',
    entry_points={
        'paratest': 'find = whatever:find'
    }
)
Setting Up TailwindCSS with React
Tailwind CSS is a utility-first CSS framework loaded with classes that help you build almost any design without leaving your HTML. In this article, we are going to learn how to integrate Tailwind CSS with React.
Getting Started
Let's create your React project. We are going to use create-react-app to set up everything related to React quickly.
npx create-react-app reactailwind
Now go to your project directory with cd reactailwind and install Tailwind CSS.
Install Tailwind
Tailwind requires you to install its peer dependencies, postcss and autoprefixer. If you don't have postcss and autoprefixer installed in your project, install them with the following command.
npm install -D tailwindcss postcss autoprefixer
Why autoprefixer with Tailwind?
Tailwind does not automatically add vendor prefixes to the CSS it generates, we recommend installing autoprefixer to handle this for you
Error: PostCSS plugin tailwindcss requires PostCSS 8
I faced this issue while integrating Tailwind with CRA. To resolve it, I simply installed the PostCSS module.
npm install postcss
Generate Tailwind Configuration
We can generate the Tailwind configuration file by the following command.
npx tailwindcss init
The above command will generate a bare configuration file that you can customize according to your project's needs. Let's keep this file as it is.
module.exports = {
  purge: [],
  darkMode: false, // or 'media' or 'class'
  theme: {
    extend: {},
  },
  variants: {
    extend: {},
  },
  plugins: [],
}
Set-up PostCSS Configuration
Let's create a postcss.config.js file to configure PostCSS for Tailwind.
touch postcss.config.js
Open the postcss.config.js file and paste the following code into it.
const tailwindcss = require("tailwindcss");

module.exports = {
  plugins: [
    tailwindcss("./tailwind.config.js"),
    require("autoprefixer")
  ]
};
Create Global CSS
Next, create a CSS file and import the base, components and utilities from Tailwind. I have created a file global.css under src/styles/global.css.
@tailwind base;
@tailwind components;
@tailwind utilities;
Connect Tailwind, PostCSS and React
Now, open the package.json and change the scripts. This is how it looks before editing.
"scripts": {
  "start": "react-scripts start",
  "build": "react-scripts build",
  "test": "react-scripts test",
  "eject": "react-scripts eject"
},
Modify it with the following
"scripts": { "start": "npm run watch:css && react-scripts start", "build": "npm run build:css && react-scripts build", "test": "react-scripts test", "eject": "react-scripts eject", "build:css": "postcss src/styles/global.css -o src/styles/main.css", "watch:css": "postcss src/styles/global.css -o src/styles/main.css" },
As you noticed, we have created two new commands
watch:css and
build:css that runs the
postcss command. We added the source for the global style and the output path to the
postcss command.
Now open the
App.js file and add the compiled style by postcss.
import './styles/main.css';
We hope this article will help you to learn the setting up tailwindcss with react. If you like this article then please follow us on Facebook and Twitter. | https://webomnizz.com/setting-up-tailwindcss-with-react/ | CC-MAIN-2020-50 | refinedweb | 485 | 50.63 |
Generics allow you to write a class or method that can work with any data type.
Write the specifications for the class or the method, with substitute parameters for data types. When the compiler encounters a constructor for the class or a function call for the method, it generates code to handle the specific data type.
Generics is a technique that enriches your programs in the following ways − namespace.
You can create your own generic interfaces, classes, methods, events, and delegates.
You may create generic classes constrained to enable access to methods on particular data types.
You may get information on the types used in a generic data type at run-time by means of reflection. | https://www.tutorialspoint.com/Generics-in-Chash | CC-MAIN-2022-05 | refinedweb | 116 | 63.29 |
HDFS Architecture
HDFS has a master and slaves architecture in which the master is called the name node and slaves are called data nodes (see Figure 3.1). An HDFS cluster consists of a single name node that manages the file system namespace (or metadata) and controls access to the files by the client applications, and multiple data nodes (in hundreds or thousands) where each data node manages file storage and storage device attached to it.
FIGURE 3.1 How a client reads and writes to and from HDFS.
While storing a file, HDFS internally splits it into one or more blocks (chunks of 64MB, by default, which is configurable and can be changed at cluster level or when each file is created). These blocks are stored in a set of slaves, called data nodes, to ensure that parallel writes or reads can be done even on a single file. Multiple copies of each block are stored per replication factor (which is configurable and can be changed at the cluster level, or at file creation, or even at a later stage for a stored file) for making the platform fault tolerant.
The name node is also responsible for managing file system namespace operations, including opening, closing, and renaming files and directories. The name node records any changes to the file system namespace or its properties. The name node contains information related to the replication factor of a file, along with the map of the blocks of each individual file to data nodes where those blocks exist. Data nodes are responsible for serving read and write requests from the HDFS clients and perform operations such as block creation, deletion, and replication when the name node tells them to. Data nodes store and retrieve blocks when they are told to (by the client applications or by the name node), and they report back to the name node periodically with lists of blocks that they are storing, to keep the name node up to date on the current status.
A client application talks to the name node to get metadata information about the file system. It connects data nodes directly so that they can transfer data back and forth between the client and the data nodes.
The name node and data node are pieces of software called daemons in the Hadoop world. A typical deployment has a dedicated high-end machine that runs only the name node daemon; the other machines in the cluster run one instance of the data node daemon apiece on commodity hardware. Next are some reasons you should run a name node on a high-end machine:
- The name node is a single point of failure. Make sure it has enough processing power and storage capabilities to handle loads. You need a scaled-up machine for a name node.
- The name node keeps metadata related to the file system namespace in memory, for quicker response time. Hence, more memory is needed.
- The name node coordinates with hundreds or thousands of data nodes and serves the requests coming from client applications.
As discussed earlier, HDFS is based on a traditional hierarchical file organization. A user or application can create directories or subdirectories and store files inside. This means that you can create a file, delete a file, rename a file, or move a file from one directory to another.
All this information, along with information related to data nodes and blocks stored in each of the data nodes, is recorded in the file system namespace, called fsimage and stored as a file on the local host OS file system at the name node daemon. This fsimage file is not updated with every addition or removal of a block in the file system. Instead, the name node logs and maintains these add/remove operations in a separate edit log file, which exists as another file on the local host OS file system. Appending updates to a separate edit log achieves faster I/O.
A secondary name node is another daemon. Contrary to its name, the secondary name node is not a standby name node, so it is not meant as a backup in case of name node failure. The primary purpose of the secondary name node is to periodically download the name node fsimage and edit the log file from the name node, create a new fsimage by merging the older fsimage and edit the log file, and upload the new fsimage back to the name node. By periodically merging the namespace fsimage with the edit log, the secondary name node prevents the edit log from becoming too large.
The process of generating a new fsimage from a merge operation is called the Checkpoint process (see Figure 3.2). Usually the secondary name node runs on a separate physical machine than the name node; it also requires plenty of CPU and as much as memory as the name node to perform the Checkpoint operation.
FIGURE 3.2 Checkpoint process.
As you can see in Table 3.1, the core-site.xml configuration file for Hadoop 1.0 contains some configuration settings related to the Checkpoint process. You can change these configuration settings to change the Hadoop behavior. See Table 3.1 for a Checkpoint-related configuration example in Hadoop 1.0.
TABLE 3.1 Checkpoint-Related Configuration in Hadoop 1.0
Table 3.2 shows some configuration settings related to the Checkpoint process that are available in the core-site.xml configuration file for Hadoop 2.0.
TABLE 3.2 Checkpoint-Related Configuration in Hadoop 2.0
With the secondary name node performing this task periodically, the name node can restart relatively faster. Otherwise, the name node would need to do this merge operation when it restarted.
The secondary name node is also responsible for backing up the name node fsimage (a copy of the merged fsimage), which is used if the primary name node fails. However, the state of the secondary name node lags that of the primary, so if the primary name node fails, data loss might occur.
File Split in HDFS
As discussed earlier, HDFS works best with small numbers of very large files for storing large data sets that the applications need. As you can see in Figure 3.3, while storing files, HDFS internally splits a file content into one or more data blocks (chunks of 64MB, by default, which is configurable and can be changed when needed at the cluster instance level for all the file writes or when each specific file is created). These data blocks are stored on a set of slaves called data nodes, to ensure a parallel data read or write.
FIGURE 3.3 File split process when writing to HDFS.
All blocks of a file are the same size except the last block, which can be either the same size or smaller. HDFS stores each file as a sequence of blocks, with each block stored as a separate file in the local file system (such as NTFS).
Cluster-wide block size is controlled by the dfs.blocksize configuration property in the hdfs-site.xml file. The dfs.blocksize configuration property applies for files that are created without a block size specification. This configuration has a default value of 64MB and usually varies from 64MB to 128MB, although many installations now use 128MB. In Hadoop 2.0, the default block is 128MB (see Table 3.3). The block size can continue to grow as transfer speeds grow with new generations of disk drives.
TABLE 3.3 Block Size Configuration
Block Placement and Replication in HDFS
You have already seen that each file is broken in multiple data blocks. Now you can explore how these data blocks get stored. By default, each block of a file is stored three times on three different data nodes: The replication factor configuration property has a default value of 3 (see Table 3.4).
TABLE 3.4 Block Replication Configuration
When a file is created, an application can specify the number of replicas of each block of the file that HDFS must maintain. Multiple copies or replicas of each block makes it fault tolerant: If one copy is not accessible or gets corrupted, the data can be read from the other copy. The number of copies of each block is called the replication factor for a file, and it applies to all blocks of a file.
While writing a file, or even for an already stored file, an application can override the default replication factor configuration and specify another replication factor for that file. For example, the replication factor can be specified at file creation time and can be even changed later, when needed.
The name node has the responsibility of ensuring that the number of copies or replicas of each block is maintained according to the applicable replication factor for each file. If necessary, it instructs the appropriate data nodes to maintain the defined replication factor for each block of a file.
Each data node in the cluster periodically sends a heartbeat signal and a block-report to the name node. When the name node receives the heartbeat signal, it implies that the data node is active and functioning properly. A block-report from a data node contains a list of all blocks on that specific data node.
A typical Hadoop installation spans hundreds or thousands of nodes. A collection of data nodes is placed in rack together for a physical organization, so you effectively have a few dozen racks. For example, imagine that you have 100 nodes in a cluster, and each rack can hold 5 nodes. You then have 20 racks to accommodate all the 100 nodes, each containing 5 nodes.
The simplest block placement solution is to place each copy or replica of a block in a separate rack. Although this ensures that data is not lost even in case of multiple rack failures and delivers an enhanced read operation by utilizing bandwidth from all the racks, it incurs a huge performance penalty when writing data to HDFS because a write operation must transfer blocks to multiple racks. Remember also that communication between data nodes across racks is much more expensive than communication across nodes in a single rack.
The other solution is to put together all the replicas in the different data nodes of a single rack. This scenario improves the write performance, but rack failure would result in total data loss.
To take care of this situation, HDFS has a balanced default block placement policy. Its objective is to have a properly load-balanced, fast-access, fault-tolerance file system:
- The first replica is written to the data node creating the file, to improve the write performance because of the write affinity.
- The second replica is written to another data node within the same rack, to minimize the cross-rack network traffic.
- The third replica is written to a data node in a different rack, ensuring that even if a switch or rack fails, the data is not lost. (This applies only if you have configured your cluster for rack awareness as discussed in the section “Rack Awareness” later in this hour.
You can see in Figure 3.4 that this default block placement policy cuts the cross-rack write traffic. It generally improves write performance without compromising on data reliability or availability, while still maintaining read performance.
FIGURE 3.4 Data block placement on data nodes.
The replication factor is an important configuration to consider. The default replication factor of 3 provides an optimum solution of both write and read performance while also ensuring reliability and availability. However, sometimes you need to change the replication factor configuration property or replication factor setting for a file. For example, you need to change the replication factor configuration to 1 if you have a single-node cluster.
For other cases, consider an example. Suppose you have some large files whose loss would be acceptable (for example, a file contains data older than 5 years, and you often do analysis over the last 5 years of data). Also suppose that you can re-create these files in case of data loss). You can set its replication factor to 1 to minimize the need for storage requirement and, of course, to minimize the time taken to write it.
You can even set the replication factor to 2, which requires double the storage space but ensures availability in case a data node fails (although it might not be helpful in case of a rack failure). You can change the replication factor to 4 or higher, which will eventually improve the performance of the read operation at the cost of a more expensive write operation, and with more storage space requirement to store another copies.
Writing to HDFS
As discussed earlier, when a client or application wants to write a file to HDFS, it reaches out to the name node with details of the file. The name node responds with details based on the actual size of the file, block, and replication configuration. These details from the name node contain the number of blocks of the file, the replication factor, and data nodes where each block will be stored (see Figure 3.5).
FIGURE 3.5 The client talks to the name node for metadata to specify where to place the data blocks.
Based on information received from the name node, the client or application splits the files into multiple blocks and starts sending them to the first data node. Normally, the first replica is written to the data node creating the file, to improve the write performance because of the write affinity.
As you see in Figure 3.6, Block A is transferred to data node 1 along with details of the two other data nodes where this block needs to be stored. When it receives Block A from the client (assuming a replication factor of 3), data node 1 copies the same block to the second data node (in this case, data node 2 of the same rack). This involves a block transfer via the rack switch because both of these data nodes are in the same rack. When it receives Block A from data node 1, data node 2 copies the same block to the third data node (in this case, data node 3 of the another rack). This involves a block transfer via an out-of-rack switch along with a rack switch because both of these data nodes are in separate racks.
FIGURE 3.6 The client sends data blocks to identified data nodes.
When all the instructed data nodes receive a block, each one sends a write confirmation to the name node (see Figure 3.7).
FIGURE 3.7 Data nodes update the name node about receipt of the data blocks.
Finally, the first data node in the flow sends the confirmation of the Block A write to the client (after all the data nodes send confirmation to the name node) (see Figure 3.8).
FIGURE 3.8 The first data node sends an acknowledgment back to the client.
For example, Figure 3.9 shows how data block write state should look after transferring Blocks A, B, and C, based on file system namespace metadata from the name node to the different data nodes of the cluster. This continues for all other blocks of the file.
FIGURE 3.9 All data blocks are placed in a similar way.
HDFS uses several optimization techniques. One is to use client-side caching, by the HDFS client, to improve the performance of the block write operation and to minimize network congestion. The HDFS client transparently caches the file into a temporary local file. When it accumulates data as big as a defined block size, the client reaches out to the name node.
At this time, the name node responds by inserting the filename into the file system hierarchy and allocating data nodes for its storage. The client flushes the block of data from the local temporary file to the closest data node and that data node creates copies of the block to other data nodes to maintain replication factor (as instructed by the name node based on the replication factor of the file).
When all the blocks of a file are transferred to the respective data nodes, the client tells the name node that the file is closed. The name node then commits the file creation operation to a persistent store.
Reading from HDFS
To read a file from the HDFS, the client or application reaches out to the name node with the name of the file and its location. The name node responds with the number of blocks of the file, data nodes where each block has been stored (see Figure 3.10).
FIGURE 3.10 The client talks to the name node to get metadata about the file it wants to read.
Now the client or application reaches out to the data nodes directly (without involving the name node for actual data transfer—data blocks don’t pass through the name node) to read the blocks of the files in parallel, based on information received from the name node. When the client or application receives all the blocks of the file, it combines these blocks into the form of the original file (see Figure 3.11).
FIGURE 3.11 The client starts reading data blocks of the file from the identified data nodes.
To improve the read performance, HDFS tries to reduce bandwidth consumption by satisfying a read request from a replica that is closest to the reader. It looks for a block in the same node, then another node in the same rack, and then finally another data node in another rack. If the HDFS cluster spans multiple data centers, a replica that resides in the local data center (the closest one) is preferred over any remote replica from remote data center.
Handling Failures
On cluster startup, the name node enters into a special state called safe mode. During this time, the name node receives a heartbeat signal (implying that the data node is active and functioning properly) and a block-report from each data node (containing a list of all blocks on that specific data node) in the cluster. Figure 3.12 shows how all the data nodes of the cluster send a periodic heartbeat signal and block-report to the name node.
FIGURE 3.12 All data nodes periodically send heartbeat signals to the name node.
Based on the replication factor setting, each block has a specified minimum number of replicas to be maintained. A block is considered safely replicated when the number of replicas (based on replication factor) of that block has checked in with the name node. If the name node identifies blocks with less than the minimal number of replicas to be maintained, it prepares a list.
After this process, plus an additional few seconds, the name node exits safe mode state. Now the name node replicates these blocks (which have fewer than the specified number of replicas) to other data nodes.
Now let’s examine how the name node handles a data node failure. In Figure 3.13, you can see four data nodes (two data nodes in each rack) in the cluster. These data nodes periodically send heartbeat signals (implying that a particular data node is active and functioning properly) and a block-report (containing a list of all blocks on that specific data node) to the name node.
FIGURE 3.13 The name node updates its metadata based on information it receives from the data nodes.
The name node thus is aware of all the active or functioning data nodes of the cluster and what block each one of them contains. You can see that the file system namespace contains the information about all the blocks from each data node (see Figure 3.13).
Now imagine that data node 4 has stopped working. In this case, data node 4 stops sending heartbeat signals to the name node. The name node concludes that data node 4 has died. After a certain period of time, the name nodes concludes that data node 4 is not in the cluster anymore and that whatever data node 4 contained should be replicated or load-balanced to the available data nodes.
As you can see in Figure 3.14, the dead data node 4 contained blocks B and C, so name node instructs other data nodes, in the cluster that contain blocks B and C, to replicate it in manner; it is load-balanced and the replication factor is maintained for that specific block. The name node then updates its file system namespace with the latest information regarding blocks and where they exist now.
FIGURE 3.14 Handling a data node failure transparently.
Delete Files from HDFS to Decrease the Replication Factor
By default, when you delete a file or a set of files from HDFS, the file(s) get deleted permanently and there is no way to recover it. But don’t worry: HDFS has a feature called Trash that you can enable to recover your accidently deleted file(s). As you can see in Table 3.5, this feature is controlled by two configuration properties: fs.trash.interval and fs.trash.checkpoint.interval in the core-site.xml configuration file.
TABLE 3.5 Trash-Related Configuration
By default, the value for fs.trash.interval is 0, which signifies that trashing is disabled. To enable it, you can set it to any numeric value greater than 0, represented in minutes. This instructs HDFS to move your deleted files to the Trash folder for that many minutes before it can permanently delete them from the system. In other words, it indicates the time interval a deleted file will be made available to the Trash folder so that the system can recover it from there, either until it crosses the fs.trash.interval or until the next trash checkpoint occurs.
By default, the value for fs.trash.checkpoint.interval is also 0. You can set it to any numeric value, but it must be smaller than or equal to the value specified for fs.trash.interval. It indicates how often the trash checkpoint operation should run. During trash checkpoint operation, it checks for all the files older than the specified fs.trash.interval and deletes them. For example, if you have set fs.trash.interval to 120 and fs.trash.checkpoint.interval to 60, the trash checkpoint operation kicks in every 60 minutes to see if any files are older than 120 minutes. If so, it deletes that files permanently from the Trash folder.
When you decrease the replication factor for a file already stored in HDFS, the name node determines, on the next heartbeat signal to it, the excess replica of the blocks of that file to be removed. It transfers this information back to the appropriate data nodes, to remove corresponding blocks and free up occupied storage space. | http://www.informit.com/articles/article.aspx?p=2460260&seqNum=2 | CC-MAIN-2019-22 | refinedweb | 3,841 | 60.35 |
The key to this problem is how to characterize "slope" determined by two points. I see many posts here using either a
doubleto represent slope or
pair<int,int>with GCD process to make it unique. But this either has issues with numerical accuracy or having to do recursive calculation.
The idea here.
Note that this formulation is actually an extension to the traditional slope calculation
x/y which not only needs to deal with
y == 0as corner case but also unavoidably involves floating point numerical comparison.
Once the order is defined, it can be used as a key in
std::map for counting purpose to gather points along the same line with a reference point, i.e.,
- define
map<Point, int> countwith
count[Point(x,y)]being the number of points with slope
(x,y)to the reference point.
Note that we count singular slope
(0,0) (i.e., duplicated points) separately since this "special" slope is essentially undefined.
Since an ordered container
std::mapis used, the time complexity is
O(N^2*logN). The space complexity is
O(N)to store
map<Point, int> count.
int maxPoints(vector<Point>& pts) { int maxPts = 0; for (auto& i:pts) { map<Point, int, Comp> count; int dup = 0; for (auto& j:pts) maxPts = max(maxPts, i.x==j.x && i.y==j.y? ++dup : (++count[Point(i.x-j.x,i.y-j.y)]+dup)); } return maxPts; } // comparator for key (slope) in map struct Comp { bool operator()(const Point& a, const Point& b) { int64_t diff = (int64_t)a.x*b.y - (int64_t)a.y*b.x; // convert to 64bit int for int overflow return (int64_t)a.y*b.y>0? diff > 0 : diff < 0; } };
You still have a bug, though, due to overflow. For example you fail input (got fixed)
[[0,0],[65536,65536],[65536,131072]], returning 3 instead of the correct 2. (And the judge solution fails it as well, expecting 3.)
@StefanPochmann said in 6-line C++ concise solution, NO double hash key or GCD for slope:
[[0,0],[65536,65536],[65536,131072]]
Yeah, you are right. For variable range in
|x| <= MAX, expression
f = x1*x2-x3*x4 should be able to handle range
|f| <= 2*MAX^2. By definition of struct
Point, I modified
Comp to convert to
int64_t to handle
int overflow.
Thanks for the test case
[[0,0],[65536,65536],[65536,131072]] which should have been added to OJ's tests with expected result
2 (instead of
3).
@zzg_zzm Thanks, just added @StefanPochmann 's test case.
Does it work if count[XXX] gets increased first and then dup gets increased? Say count 0 => 10 then dup 0 => 10, maxPts should be 20 instead of 10.
@lxtbell2 said in 6-line C++ concise solution, NO double hash key or GCD for slope:
Does it work if count[XXX] gets increased first and then dup gets increased? Say count 0 => 10 then dup 0 => 10, maxPts should be 20 instead of 10.
I get your point. Yes, if the "anchor" point is duplicated with last point in vector
pts, we indeed miss some counting. However, this does not matter because we are looping through all points anyway. If not all points are duplicated, we must at some point use an "anchor" point which is not duplicated with
pts[N-1] and this will get correct max count at least once, so we can simply update global max count by a single line
maxPts = max(maxPts, i.x==j.x && i.y==j.y? ++dup : (++count[Point(i.x-j.x,i.y-j.y)]+dup)) inside double loops.
Well, I admit that it is not readable...The readable one should count
dup and max value in map
count[] before updating global max count.
Readable Version:
int maxPoints(vector<Point>& pts) { int maxPts = 0; for (auto& i:pts) { map<Point, int, Comp> count; int dup = 0, maxCount = 0; for (auto& j:pts) { if (i.x==j.x && i.y==j.y) ++dup; else maxCount = max(maxCount, ++count[Point(i.x-j.x,i.y-j.y)]); } maxPts = max(maxPts, maxCount + dup); } return maxPts; }
Looks like your connection to LeetCode Discuss was lost, please wait while we try to reconnect. | https://discuss.leetcode.com/topic/68076/6-line-c-concise-solution-no-double-hash-key-or-gcd-for-slope | CC-MAIN-2017-47 | refinedweb | 700 | 72.16 |
Django class based views for templates stored in a database.
Project description
Serve your single page Javascript applications from Django.
Documentation
The full documentation is at.
Requirements
- Django > 1.8
- A database engine such as MySQL
Quickstart
Install django-database-views using pip:
pip install django-database-views
Add it to your installed apps:
INSTALLED_APPS = ( ... 'database_views.apps.DatabaseViewsConfig', ... )
Create a model to store versions for your index template in your app’s models.py:
from database_views.models import AbstractTemplate class IndexTemplate(AbstractTemplate): class Meta: db_table = 'your_table_name' # For example 'index_template'.
Create a class-based view for your single page app in your app’s views.py and assign your model to its model property:
from database_views.views import DatabaseTemplateView from database_views.views import CachedTemplateResponse from myapp.models import IndexTemplate class IndexView(DatabaseTemplateView): app_name = 'main' model = IndexTemplate response_class = CachedTemplateResponse
Add a route for your index page view in your project’s urls.py file:
from myapp.views import IndexView urlpatterns = [ ... url(r'^$', IndexView.as_view()) ... ]
That’s it!! Go to your new route and you should see your single page app’s index template served. Please ensure that you configure the serving of your app’s static assets properly.
Features
- Easily serve your single page javascript applications from Django.
- Optionally cache your templates for a configurable amount of time.
- Works with ember-cli-deploy and more specifically with ember-cli-deploy-mysql.
Running Tests
To run tests use the following commands from the root of this project:
source <YOURVIRTUALENV>/bin/activate (myenv) $ pip install -r requirements_test.txt (myenv) $ py.test
Credits
Tools used in rendering this package:
History
0.1.0 (2017-03-10)
- First release on PyPI.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/django-database-views/0.1.2/ | CC-MAIN-2022-05 | refinedweb | 304 | 52.66 |
By definition, the short-circuiting boolean operators will only evaluate the second operand if the first operand can not determine the overall result of the expression.
It means that if you are using the && operator as firstCondition && secondCondition, it will evaluate secondCondition only when firstCondition is true, and of course the overall result will be true only if both firstCondition and secondCondition evaluate to true. This is useful in many scenarios. For example, imagine that you want to check whether your list has more than three elements, but you also have to check that the list has been initialized so you don't run into a NullReferenceException. You can achieve this as below:

bool hasMoreThanThreeElements = myList != null && myList.Count > 3;

myList.Count > 3 will not be checked until myList != null is met.
Logical AND
&& is the short-circuiting counterpart of the standard boolean AND (&) operator.
var x = true;
var y = false;

x && x // Returns true.
x && y // Returns false (y is evaluated).
y && x // Returns false (x is not evaluated).
y && y // Returns false (right y is not evaluated).
Logical OR
|| is the short-circuiting counterpart of the standard boolean OR (|) operator.
var x = true;
var y = false;

x || x // Returns true (right x is not evaluated).
x || y // Returns true (y is not evaluated).
y || x // Returns true (x and y are evaluated).
y || y // Returns false (y and y are evaluated).
Example usage
if (obj != null && obj.Property) // obj.Property is never accessed if obj is null, because of the short circuit.
    Action1();
else
    Action2();
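For comparison, the non-short-circuiting & operator always evaluates both operands. A small sketch illustrating the difference (the SideEffect helper is made up for illustration; it simply logs when it runs):

```csharp
using System;

class ShortCircuitDemo
{
    // Hypothetical helper that logs whenever it is evaluated.
    static bool SideEffect(string name, bool value)
    {
        Console.WriteLine($"Evaluating {name}");
        return value;
    }

    static void Main()
    {
        // Short-circuiting: the right operand is skipped because
        // the left operand already determines the result.
        bool a = SideEffect("left", false) && SideEffect("right", true);
        // Prints only "Evaluating left".

        // Non-short-circuiting: both operands are evaluated.
        bool b = SideEffect("left", false) & SideEffect("right", true);
        // Prints "Evaluating left" and then "Evaluating right".
    }
}
```

This is why the second operand of && is a safe place for expressions that would throw if the first condition were false, such as the null check pattern above.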
Returns an int holding the size of a type* in bytes.
sizeof(bool)    // Returns 1.
sizeof(byte)    // Returns 1.
sizeof(sbyte)   // Returns 1.
sizeof(char)    // Returns 2.
sizeof(short)   // Returns 2.
sizeof(ushort)  // Returns 2.
sizeof(int)     // Returns 4.
sizeof(uint)    // Returns 4.
sizeof(float)   // Returns 4.
sizeof(long)    // Returns 8.
sizeof(ulong)   // Returns 8.
sizeof(double)  // Returns 8.
sizeof(decimal) // Returns 16.
*Only supports certain primitive types in safe context.
In an unsafe context, sizeof can be used to return the size of other primitive types and structs.
public struct CustomType
{
    public int value;
}

static void Main()
{
    unsafe
    {
        Console.WriteLine(sizeof(CustomType)); // outputs: 4
    }
}
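Note that sizeof reports the managed size of a type, which is not always the same as the unmanaged (marshaled) size returned by System.Runtime.InteropServices.Marshal.SizeOf. A quick sketch of the difference for char, whose default marshaling is a single ANSI byte:

```csharp
using System;
using System.Runtime.InteropServices;

class SizeDemo
{
    static void Main()
    {
        // Managed size: a char is a 2-byte UTF-16 code unit.
        Console.WriteLine(sizeof(char));                 // 2

        // Marshaled size: by default a char marshals as 1 ANSI byte.
        Console.WriteLine(Marshal.SizeOf(typeof(char))); // 1
    }
}
```

Use sizeof when reasoning about managed memory layout, and Marshal.SizeOf when interoperating with native code.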
Overloading just equality operators is not enough. Under different circumstances, all of the following can be called:
object.Equalsand
object.GetHashCode
IEquatable<T>.Equals(optional, allows avoiding boxing)
operator ==and
operator !=(optional, allows using operators)
When overriding
Equals,
GetHashCode must also be overriden. When implementing
Equals, there are many special cases: comparing to objects of a different type, comparing to self etc.
When NOT overridden
Equals method and
== operator behave differently for classes and structs. For classes just references are compared, and for structs values of properties are compared via reflection what can negatively affect performance.
== can not be used for comparing structs unless it is overridden.
Generally equality operation must obey the following rules:
Aalways equals
A(may not be true for
NULLvalues in some systems).
Aequals
B, and
Bequals
C, then
Aequals
C.
Aequals
B, then
Aand
Bhave equal hash codes.
Band
Care instances of
Class2inherited from
Class1:
Class1.Equals(A,B)must always return the same value as the call to
Class2.Equals(A,B).
class Student : IEquatable<Student> { public string Name { get; set; } = ""; public bool Equals(Student other) { if (ReferenceEquals(other, null)) return false; if (ReferenceEquals(other, this)) return true; return string.Equals(Name, other.Name); } public override bool Equals(object obj) { if (ReferenceEquals(null, obj)) return false; if (ReferenceEquals(this, obj)) return true; return Equals(obj as Student); } public override int GetHashCode() { return Name?.GetHashCode() ?? 0; } public static bool operator ==(Student left, Student right) { return Equals(left, right); } public static bool operator !=(Student left, Student right) { return !Equals(left, right); } }
var now = DateTime.UtcNow; //accesses member of a class. In this case the UtcNow property.
var zipcode = myEmployee?.Address?.ZipCode; //returns null if the left operand is null. //the above is the equivalent of: var zipcode = (string)null; if (myEmployee != null && myEmployee.Address != null) zipcode = myEmployee.Address.ZipCode;
var age = GetAge(dateOfBirth); //the above calls the function GetAge passing parameter dateOfBirth.
var letters = "letters".ToCharArray(); char letter = letters[1]; Console.WriteLine("Second Letter is {0}",letter); //in the above example we take the second character from the array //by calling letters[1] //NB: Array Indexing starts at 0; i.e. the first letter would be given by letters[0].
var letters = null; char? letter = letters?[1]; Console.WriteLine("Second Letter is {0}",letter); //in the above example rather than throwing an error because letters is null //letter is assigned the value null
The operator for an "exclusive or" (for short XOR) is: ^
This operator returns true when one, but only one, of the supplied bools are true.
true ^ false // Returns true false ^ true // Returns true false ^ false // Returns false true ^ true // Returns false
The shift operators allow programmers to adjust an integer by shifting all of its bits to the left or the right. The following diagram shows the affect of shifting a value to the left by one digit.
Left-Shift
uint value = 15; // 00001111 uint doubled = value << 1; // Result = 00011110 = 30 uint shiftFour = value << 4; // Result = 11110000 = 240
Right-Shift
uint value = 240; // 11110000 uint halved = value >> 1; // Result = 01111000 = 120 uint shiftFour = value >> 4; // Result = 00001111 = 15
C#.
C# has several operators that can be combined with an
= sign to evaluate the result of the operator and then assign the result to the original variable.
Example:
x += y
is the same as
x = x + y
Assignment operators:
+=
-=
*=
/=
%=
&=
|=
^=
<<=
>>=
Returns one of two values depending on the value of a Boolean expression.
Syntax:
condition ? expression_if_true : expression_if_false;
Example:
string name = "Frank"; Console.WriteLine(name == "Frank" ? "The name is Frank" : "The name is not Frank");
The ternary operator is right-associative which allows for compound ternary expressions to be used. This is done by adding additional ternary equations in either the true or false position of a parent ternary equation. Care should be taken to ensure readability, but this can be useful shorthand in some circumstances.
In this example, a compound ternary operation evaluates a
clamp function and returns the current value if it's within the range, the
min value if it's below the range, or the
max value if it's above the range.
light.intensity = Clamp(light.intensity, minLight, maxLight); public static float Clamp(float val, float min, float max) { return (val < min) ? min : (val > max) ? max : val; }
Ternary operators can also be nested, such as:
a ? b ? "a is true, b is true" : "a is true, b is false" : "a is false" // This is evaluated from left to right and can be more easily seen with parenthesis: a ? (b ? x : y) : z // Where the result is x if a && b, y if a && !b, and z if !a
When writing compound ternary statements, it's common to use parenthesis or indentation to improve readability.
The types of expression_if_true and expression_if_false must be identical or there must be an implicit conversion from one to the other.
condition ? 3 : "Not three"; // Doesn't compile because `int` and `string` lack an implicit conversion. condition ? 3.ToString() : "Not three"; // OK because both possible outputs are strings. condition ? 3 : 3.5; // OK because there is an implicit conversion from `int` to `double`. The ternary operator will return a `double`. condition ? 3.5 : 3; // OK because there is an implicit conversion from `int` to `double`. The ternary operator will return a `double`.
The type and conversion requirements apply to your own classes too.
public class Car {} public class SportsCar : Car {} public class SUV : Car {} condition ? new SportsCar() : new Car(); // OK because there is an implicit conversion from `SportsCar` to `Car`. The ternary operator will return a reference of type `Car`. condition ? new Car() : new SportsCar(); // OK because there is an implicit conversion from `SportsCar` to `Car`. The ternary operator will return a reference of type `Car`. condition ? new SportsCar() : new SUV(); // Doesn't compile because there is no implicit conversion from `SportsCar` to SUV or `SUV` to `SportsCar`. The compiler is not smart enough to realize that both of them have an implicit conversion to `Car`. condition ? new SportsCar() as Car : new SUV() as Car; // OK because both expressions evaluate to a reference of type `Car`. The ternary operator will return a reference of type `Car`.
Gets
System.Type object for a type.
System.Type type = typeof(Point) //System.Drawing.Point System.Type type = typeof(IDisposable) //System.IDisposable System.Type type = typeof(Colors) //System.Drawing.Color System.Type type = typeof(List<>) //System.Collections.Generic.List`1[T]
To get the run-time type, use
GetType method to obtain the
System.Type of the current instance.
Operator
typeof takes a type name as parameter, which is specified at compile time.
public class Animal {} public class Dog : Animal {} var animal = new Dog(); Assert.IsTrue(animal.GetType() == typeof(Animal)); // fail, animal is typeof(Dog) Assert.IsTrue(animal.GetType() == typeof(Dog)); // pass, animal is typeof(Dog) Assert.IsTrue(animal is Animal); // pass, animal implements Animal
The built-in primitive data types, such as
char,
int, and
float, as well as user-defined types declared with
struct, or
enum. Their default value is
new T() :
default(int) // 0 default(DateTime) // 0001-01-01 12:00:00 AM default(char) // '\0' This is the "null character", not a zero or a line break. default(Guid) // 00000000-0000-0000-0000-000000000000 default(MyStruct) // new MyStruct() // Note: default of an enum is 0, and not the first *key* in that enum // so it could potentially fail the Enum.IsDefined test default(MyEnum) // (MyEnum)0
Any
class,
interface, array or delegate type. Their default value is
null :
default(object) // null default(string) // null default(MyClass) // null default(IDisposable) // null default(dynamic) // null
Returns a string that represents the unqualified name of a
variable,
type, or
member.
int counter = 10; nameof(counter); // Returns "counter" Client client = new Client(); nameof(client.Address.PostalCode)); // Returns "PostalCode"
The
nameof operator was introduced in C# 6.0. It is evaluated at compile-time and the returned string value is inserted inline by the compiler, so it can be used in most cases where the constant string can be used (e.g., the
case labels in a
switch statement, attributes, etc...). It can be useful in cases like raising & logging exceptions, attributes, MVC Action links, etc...
Introduced");
Postfix increment
X++ will add
1 to
x
var x = 42; x++; Console.WriteLine(x); // 43
Postfix decrement
X-- will subtract one
var x = 42 x--; Console.WriteLine(x); // 41
++x is called prefix increment it increments the value of x and then returns x
while
x++ returns the value of x and then increments
var x = 42; Console.WriteLine(++x); // 43 System.out.println(x); // 43
while
var x = 42; Console.WriteLine(x++); // 42 System.out.println(x); // 43
both are commonly used in for loop
for(int i = 0; i < 10; i++) { }
The
=> operator has the same precedence as the assignment operator
= and is right-associative.
It is used to declare lambda expressions and also it is widely used with LINQ Queries:
string[] words = { "cherry", "apple", "blueberry" }; int shortestWordLength = words.Min((string w) => w.Length); //5
When used in LINQ extensions or queries the type of the objects can usually be skipped as it is inferred by the compiler:
int shortestWordLength = words.Min(w => w.Length); //also compiles with the same result
The general form of lambda operator is the following:
(input parameters) => expression
The parameters of the lambda expression are specified before
=> operator, and the actual expression/statement/block to be executed is to the right of the operator:
// expression (int x, string s) => s.Length > x // expression (int x, int y) => x + y // statement (string x) => Console.WriteLine(x) // block (string x) => { x += " says Hello!"; Console.WriteLine(x); }
This operator can be used to easily define delegates, without writing an explicit method:
delegate void TestDelegate(string s); TestDelegate myDelegate = s => Console.WriteLine(s + " World"); myDelegate("Hello");
instead of
void MyMethod(string s) { Console.WriteLine(s + " World"); } delegate void TestDelegate(string s); TestDelegate myDelegate = MyMethod; myDelegate("Hello");
The assignment operator
= sets thr left hand operand's value to the value of right hand operand, and return that value:
int a = 3; // assigns value 3 to variable a int b = a = 5; // first assigns value 5 to variable a, then does the same for variable b Console.WriteLine(a = 3 + 4); // prints 7
The Null-Coalescing operator
?? will return the left-hand side when not null. If it is null, it will return the right-hand side.
object foo = null; object bar = new object(); var c = foo ?? bar; //c will be bar since foo was null
The
?? operator can be chained which allows the removal of
if checks.
//config will be the first non-null returned. var config = RetrieveConfigOnMachine() ?? RetrieveConfigFromService() ?? new DefaultConfiguration(); | https://sodocumentation.net/csharp/topic/18/operators | CC-MAIN-2020-29 | refinedweb | 2,129 | 58.08 |
Important: Please read the Qt Code of Conduct -
Why single mouse movement cause multiple event occurred
Here is a simple project for windows that trying to implement the simple dragging animation.
But when it run, the widget won't be dragged by the mouse. It looks like it translate the position to the cursor for a short time but translate back to its original position.
Look into the debug.txt, the event was sent multiple time about every move of the cursor, one occurred before the widget was moved and one after that moving. The first event's position info is about the old position of the widget but the second event's position info is about the new position of the widget. So the result is the widget was moved back to its original position by the second event process.
Although I got a solution about this "multiple event occurred" is using global position information of the event, i.e. ,
ev->x() ev->y() ev->pos() // change those to ev->globalX() ev->globalY() ev->globalPos()
I was wander why a single mouse movement will cause multiple event to occurred.
Hope I can get an answer about this question.
#include "customwidget.h" #include <QApplication> #include <QMainWindow> int main(int argc, char *argv[]) { QApplication a(argc, argv); // generate simple main window. QMainWindow w; // set the size of the window. w.setGeometry(0, 0, 800, 800); // generate the CustomWidget CustomWidget *widget = new CustomWidget(&w); // display the window containing the widget w.show(); return a.exec(); }
main.cpp
#ifndef CUSTOMWIDGET_H #define CUSTOMWIDGET_H #include <QWidget> #include <fstream> class CustomWidget : public QWidget { Q_OBJECT public: explicit CustomWidget(QWidget *parent = nullptr); ~CustomWidget(); protected: // define the painting agorithm to see the area of this widget void paintEvent(QPaintEvent* ev); // handle the pressing event to initialize the dragging algorithm // and to track the start of moving event void mousePressEvent(QMouseEvent* ev); // implement the dragging algorithm void mouseMoveEvent(QMouseEvent* ev); // handle the releasing event to track the end of moving event void mouseReleaseEvent(QMouseEvent* ev); private: std::ofstream fout; // open file "debug.txt" QPoint prev; // to save the previous point of cursor. }; #endif // CUSTOMWIDGET_H
customwidget.cpp
#include "customwidget.h" #include <QMouseEvent> #include <QPaintEvent> #include <QPainter> #include <QBrush> CustomWidget::CustomWidget(QWidget *parent) : QWidget(parent) { // open file for output fout.open("debug.txt"); // set the widget size and position setGeometry(0, 0, 100, 100); } CustomWidget::~CustomWidget() { // close file when program ended fout.close(); } void CustomWidget::paintEvent(QPaintEvent *ev) { // draw the area with blue color QPainter painter(this); QBrush brush(Qt::GlobalColor::blue); painter.setBrush(brush); painter.setBackground(brush); painter.drawRect(ev->rect()); } void CustomWidget::mousePressEvent(QMouseEvent *ev) { ev->accept(); // debug output fout << "pressed at (" << ev->x() << ',' << ev->y() << ") current mouse (" << ev->globalX() << ',' << ev->globalY() << ')' << std::endl; // initialize the dragging start point prev = ev->pos(); } void CustomWidget::mouseMoveEvent(QMouseEvent *ev) { ev->accept(); // get the cursor position of this event const QPoint& pos = ev->pos(); // debug output fout << "moved from (" << prev.x() << ',' << prev.y() << ") to (" << pos.x() << ',' << pos.y() << ") current mouse (" << ev->globalX() << ',' << ev->globalY() << ')' << std::endl; // calculate the cursor movement int dx = pos.x() - prev.x(); int dy = pos.y() - prev.y(); // move the widget position to match the direction of the cursor. move(geometry().x() + dx, geometry().y() + dy); // update the cursor position for the next event prev = pos; } void CustomWidget::mouseReleaseEvent(QMouseEvent *ev) { ev->accept(); fout << "released at (" << ev->x() << ',' << ev->y() << ") current mouse (" << ev->globalX() << ',' << ev->globalY() << ')' << std::endl; }
customwidget.h
pressed at (61,61) current mouse (69,92) moved from (61,61) to (61,61) current mouse (69,92) moved from (61,61) to (61,59) current mouse (69,90) moved from (61,59) to (61,61) current mouse (69,90) released at (61,59) current mouse (69,90)
debug.txt
Edited:
Include the current mouse global position output inside debug.txt.
As you can see the same position sent two event to the widget.
Hi and welcome to devnet,
What OS are you on ?
You might want to use a minimal distance before considering that the movement is valid. See the drag and drop examples.
The mouse sensitivity can partly explain why you see several events.
I'm using Windows 10.
Its hard to believe the event manager didn't do that for me, but I did some test without using mouse to move the cursor the result is only receive only one movement event.
So you are right, I have to do the movement validate process before processing the event.
Thanks for the help!
But this comes up with a new question for me:
If event manager can detect the movement of the mouse even if that movement for system is like no movement, then can I get the exact movement with better precision?
Do you mean native events ?
QAbstractNativeEventFilter might be of interest. | https://forum.qt.io/topic/95856/why-single-mouse-movement-cause-multiple-event-occurred | CC-MAIN-2021-04 | refinedweb | 797 | 54.73 |
Hello everyone! I am new to Java so this may be a very easy question.
I am trying to write a program and display just the remainder after two numbers have been divided. Here is where i am:
import java.util.Scanner;
public class Midterm1
{
public static void main( String[] args )
{
Scanner input = new Scanner( System.in );// starts scanner
int input1;// user inputed number 1
int input2;// user inputed number 2
int answer;
System.out.print( "Enter input1: "); //prompt
input1 = input.nextInt(); // read number from user
System.out.print( "Enter input2: "); //prompt
input2 = input.nextInt(); // read number from user
answer = input1 % input2;
if( input1 % input2 == 0)
System.out.println( "No Remainder" );
if( input1 % input2 != 0)
System.out.printf( "%s%d\n", "The Remainder is:", answer);
}
}
For some reason instead of displaying the remainder it is displaying "input1".
Thank you for you help! | https://www.daniweb.com/programming/software-development/threads/264354/displaying-a-remainder-in-java | CC-MAIN-2017-26 | refinedweb | 142 | 52.26 |
Extended Euclidean Algorithm for Univariate Polynomials with Coefficients in a Finite Field
Consider the following snippet, intended to compute the extended euclidean algorithm for polynomials in $F_q[x]$. GFX.<x> = GF(5)[x] g = x^3 + 2*x^2 - x - 2 h = x - 2 r,s,t = extended_euclides(f,g) print("The GCD of {} and {} is {}".format(g,h,r)) print("Its Bézout coefficients are {} and {}".format(s,t)) assert r == gcd(g,h), "The gcd should be {}!".format(gcd(g,h)) assert r == s*g + t*h, "The cofactors are wrong!"
Executing this snippet produces the following output:
The GCD of x^3 + 2*x^2 + 4*x + 3 and x + 3 is 3*x + 1 Its Bézout coefficients are 2*x + 4 and 3*x^2 + 4 Error in lines 45-45 Traceback (most recent call last): File "/cocalc/lib/python2.7/site-packages/smc_sagews/sage_server.py", line 1188, in execute flags=compile_flags) in namespace, locals File "", line 1, in <module> AssertionError: The gcd should be 1!
That is, my code disagrees with sagemath's
gcd function, and I think sagemath's result is correct;
g(2)!=0, so
g and
h should not have common factors.
Why is mine wrong? | https://ask.sagemath.org/question/44376/extended-euclidean-algorithm-for-univariate-polynomials-with-coefficients-in-a-finite-field/?sort=votes | CC-MAIN-2020-34 | refinedweb | 203 | 55.64 |
The new anext() builtin in Python 3.10.0a7 doesn't seem to work properly when a default value is provided as the second argument. Here's an example:
import asyncio
async def f():
yield 'A'
yield 'B'
async def main():
g = f()
print(await anext(g, 'Z')) # Prints 'None' instead of 'A'!!!
print(await anext(g, 'Z')) # Prints 'None' instead of 'B'!!!
print(await anext(g, 'Z')) # Prints 'Z'
g = f()
print(await anext(g)) # Prints 'A'
print(await anext(g)) # Prints 'B'
print(await anext(g)) # Raises StopAsyncIteration
asyncio.run(main())
As indicated above, anext() works fine when no default is given (in the second half of main()), but produces None in every iteration when a default is given (in the first half of main()) except when the iterator is exhausted. | https://bugs.python.org/msg390349 | CC-MAIN-2021-17 | refinedweb | 133 | 68.91 |
Here is the code that helps to read any document (like .doc, .rtf , txt ) from specified location. This is a web based application and this code is written in C# as code behind in ASP.Net 2.0, where the word document is hard to upload from client side. Here is the code that uploads the document file and stores it into a string and from that I have placed that string into a textbox. The First Step is that, we need to add a COM reference (that’s how we need to define the word application) to the project by right clicking in the solution explorer on References->Add Reference. Click on the COM tab and look for the Microsoft Word 11.0 Object Library. Click Select
and OK.
and OK.
Now u need to add the line <identity impersonate="true"/>
Like:
<system.web>
<identity impersonate="true"/>
<%@ Page <html xmlns=""> <head runat="server"> <title>Untitled Page</title> </head> <body> <form id="form1" runat="server"> <div> <asp:FileUpload<br /> <asp:Button<br /> <asp:TextBox</asp:TextBox> <.Interop.Word; public partial class ReadWordDocument : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { } protected void btnRead_Click(object sender, EventArgs e) { ApplicationClass wordApp = new ApplicationClass(); // Input box is used to get the path of the file which has to be // uploaded into textbox. string filePath = FileUpload1.PostedFile.FileName; object file = filePath; object nullobj = System.Reflection.Missing.Value; // here on Document.Open there should be 9 arg. Document doc = wordApp.Documents.Open(ref file, ref nullobj, ref nullobj, ref nullobj, ref nullobj, ref nullobj, ref nullobj, ref nullobj, ref nullobj, ref nullobj, ref nullobj, ref nullobj, ref nullobj, ref nullobj, ref nullobj, ref nullobj); // Here the word content is copeied into a string which helps to // store it into textbox. Document doc1 = wordApp.ActiveDocument; string m_Content = doc1.Content.Text; // the content is stored into the textbox. TextBox1.Text = m_Content; doc.Close(ref nullobj, ref nullobj, ref nullobj); } }
this code is Really help ful
how to maintain view state of doc ???
this code is not working.
What is the error message you are getting ?
I'm using IIS server.
Do one thing,pass finename as hardcoded instead of selecting it from FileUpload control.like below
object file = @"C:\1.doc";
I'm giving file like below
string filePath = Server.MapPath("") + "\\ModelReports\\" + "xyz.rtf";
object file = filePath;
For testing purpose only.Use hardcoded value and check.Code is working or not?
string filePath = @"C:\xyz.rtf";
ok
no,same problem
Are you using Microsoft.Office.Interop.Word or Microsoft Office Object Library...
If using Microsoft.Office.Interop, you need to install it on server and point your reference dll.
i'm using Microsoft Office Object Library 12.0.
This code is working fine on my local machine.Can you test the code on local machine.
ya, this code is working fine on my local machine.
ohhhhhhh ...)
I thought the problem is in the code.For this office should be installed on the production server.
yes, office is installed on server machine MS office 2003 and 2007
Have u entered this into web.config
<identity impersonate="true"/>
yes this tag have been added in web.config as u instructed.... i m using VS2010
let me clear which reference hv to add Microsoft.Office.Interop.Word or Microsoft Office Object Library... because using only Microsoft Office Object Library... not possible to use ur sample code...
Check event viewer on production server and send me the error details
I am using Microsoft Office Object Library
if i use only Microsoft Office Object Library 12.0 in VS 2008 then i didnt find ApplicationClass and other classes used in above sample code... & in namespace there is "using Microsoft.Office.Core;" instead of "using Microsoft.Office.Interop.Word;"
It's strange I am using Microsoft Office Object Library 12.0 .What is the error message in eventviewr?
As you told me that this code is working fine on local machine,but not working on production.
So issue is on production,
1.check the current logged in user have permission or not.
2.Grant permission to asp_net user.
this code is only working on local system,not in server.
i use this code for share point web part but
"Document aDoc = WordApp.Documents.Open(ref fileName, ref missing, ref readOnly, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref isVisible, ref missing, ref missing, ref missing, ref missing);"
return null.
@Alok:It's look like that there is some configuration mismatch between your local machine and server
really thank so much for prgm it helped me lot and works fine gud and once again thank u so much :)
Hi to everyone...
I have use this code on local machine its work...but it is not working when i deploy my application on ftp sever... have you any idea about
?
hai i am using vs2012 and ms2010.i tried this code and not suceeded.iam getting error
Error 1 Interop type 'Microsoft.Office.Interop.Word.ApplicationClass' cannot be embedded. Use the applicable interface instead. c:\users\sevak\documents\visual studio 2012\Projects\sampleviewquiz\sampleviewquiz\WebForm1.aspx.cs 21 13 sampleviewquiz
Error 2 The type 'Microsoft.Office.Interop.Word.ApplicationClass' has no constructors defined c:\users\sevak\documents\visual studio 2012\Projects\sampleviewquiz\sampleviewquiz\WebForm1.aspx.cs 21 36 sampleviewquiz
Error 3 Interop type 'Microsoft.Office.Interop.Word.ApplicationClass' cannot be embedded. Use the applicable interface instead. c:\users\sevak\documents\visual studio 2012\Projects\sampleviewquiz\sampleviewquiz\WebForm1.aspx.cs 21 40 sampleviewquiz
Error 4 'Microsoft.Office.Interop.Word.ApplicationClass' does not contain a definition for 'Documents' and no extension method 'Documents' 25 30 sampleviewquiz
Error 5 'Microsoft.Office.Interop.Word.ApplicationClass' does not contain a definition for 'ActiveDocument' and no extension method 'ActiveDocument' 40 31 sampleviewquiz
For the ApplicationClass error in Microsoft.office.interop.word. Go to the 12.0 reference and (F4) go to properties change 'Embed Interop types' as False.
Code not word in IIS7
How to fix program?
There will be namespace error in Microsoft visual studio 2010.... so what can i do? plz suggest me solution of that problem | http://aspdotnetcodebook.blogspot.com/2008/08/how-to-read-word-document-like-doc-rtf.html | CC-MAIN-2017-47 | refinedweb | 1,030 | 52.97 |
I started diving into Qt again last night, and a few things made me wonder how it works. For example: when the UI is designed in the Designer View, is the .ui file (an XML schema) compiled into a header/source file that contains all of the controls as if instantiated in-code? The reason I ask is because the .ui file's corresponding header/source pair's class is also prototyped in the scope of the UI namespace. Then, a data member is created from that, called "ui" within the class itself. That ui data member appears to contain a pointer for each QWidget added in the Designer. For example, if I drag a QPushButton into the Designer, and name it as pushButton in its Properties Inspector, the ui member will have a ui->pushButton member pointer within it of type QPushButton. Funny thing is, I couldn't find where the "pushButton" member was declared in-code. This leads me to think that Qt Creator's Intellisense is smart enough to parse the XML file, and a class being changed behind the scenes, or at least at compile time.
Then, there are, signals and slots. It sounds like they have some sort of special definition that's triggered by the Q_WIDGET macro. Also, QWidget::connect()'s 2nd and 4th parameters are of type const char*, and it's common practice to use the SIGNAL() and SLOT() macros to pass through prototypes of functions (not methods because they don't appear to be associated with any class). I understand that that I'm calling two functions that these widgets seem to have, but how does that work under-the-hood, exactly?
Does Qt rely on some special C++ compiler, or special rules? | https://www.gamedev.net/topic/659376-understanding-qt-at-a-deeper-level/ | CC-MAIN-2017-04 | refinedweb | 292 | 69.82 |
Hi i pip installed python decouple (I am not using a virtual environment)
I made an .env file in my django root directly where the manage.py file sits.
used this tutorial here : Secure Sensitive Django Variables using Python Decouple
My env file:
SECRET_KEY=adasdasdasd (Without quotes) DEBUG=True
My settings file:
from pathlib import Path import os from decouple import config # SECURITY WARNING: keep the secret key used in production secret! SECRET_KEY = config('SECRET_KEY') # SECURITY WARNING: don't run with debug turned on in production! DEBUG = config('DEBUG', cast = bool)
Here is the Error Image:
SOLVED it by:
actually i made a new file by opening cmd prompt and then typing type nul > .env it wont let me rename or create otherwise. Thanks a lot solved :) | https://forum.djangoproject.com/t/python-decouple-error-debug-not-found-solved/8193 | CC-MAIN-2022-21 | refinedweb | 127 | 56.15 |
Step-by-step
Navigate to the web console for your ICP installation
Open a web browser to the IBM Cloud Private management console and log in using your admin user ID and password.
Create a namespace for your deployment
- Open the menu in the IBM Cloud Private web console.
- Select the Manage group and then select the Namespaces page.
- Select the Create Namespace button
- Specify a name for the new namespace, in this example the name dev-deploys is used.
- Select the Create button
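The same namespace can also be created from the command line. The sketch below writes a minimal Namespace manifest (dev-deploys is the example name used throughout this walkthrough; substitute your own):

```shell
# Write a minimal Namespace manifest; dev-deploys is the example
# name used in this walkthrough.
NAMESPACE=dev-deploys
cat > namespace.yaml <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: ${NAMESPACE}
EOF

# Apply it with: kubectl create -f namespace.yaml
cat namespace.yaml
```

Creating the namespace from a manifest rather than the console makes the setup repeatable across clusters.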
Create a certificate in the new namespace
The certificate will be used for TLS termination of the deployed application. The certificate manager shared service is used to create and manage the certificate. Details on creating certificates are available in the IBM Cloud Private Knowledge Center.
- Open a command window for the IBM Cloud Private installation by selecting the icon on the bottom right of the web console.
- Create a file named cert.yaml that contains the following contents, which are provided as a sample certificate in the Knowledge Center.
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: hello-deployment-tls-1
  namespace: foobar
spec:
  # name of the tls secret to store
  # the generated certificate/key pair
  secretName: hello-deployment-tls-1
  issuerRef:
    # ClusterIssuer Name
    name: icp-ca-issuer
    # Issuer can be referenced
    # by changing the kind here.
    # the default value is Issuer (i.e.
    # a locally namespaced Issuer)
    kind: ClusterIssuer
  commonName: "foo1.bar1"
  dnsNames:
  # one or more fully-qualified domain names
  # can be defined here
  - foo1.bar1
Edit the lines in the sample certificate file as follows
- Specify a custom certificate name by updating the name field from hello-deployment-tls-1 to your customized value.
- Specify the correct namespace by updating the namespace value from foobar to the namespace created in step 2.
- Specify a custom secret name by updating the secretName field from hello-deployment-tls-1 to your customized value.
- Change the commonName field from foo1.bar1 to the host plus domain name used to access the application on this cluster.
- Change the entries under dnsNames to remove foo1.bar1 and add any other host names that can be used to access the application.
- Save the file and run the command:
kubectl create -f cert.yaml
to create the certificate.
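The edits above lend themselves to scripting. The sketch below generates cert.yaml from shell variables; every value is an example, so substitute your own namespace, names, and host:

```shell
# Example values only -- substitute your own namespace, names, and host.
NAMESPACE=dev-deploys
CERT_NAME=jenkins-tls
SECRET_NAME=jenkins-tls
HOST=jenkins.mycluster.example

cat > cert.yaml <<EOF
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: ${CERT_NAME}
  namespace: ${NAMESPACE}
spec:
  secretName: ${SECRET_NAME}
  issuerRef:
    name: icp-ca-issuer
    kind: ClusterIssuer
  commonName: "${HOST}"
  dnsNames:
  - ${HOST}
EOF

# Then create it with: kubectl create -f cert.yaml
```

Keeping the secret name in a variable also makes it easy to reference the same secret later from the ingress definition.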
- Validate the certificate was created successfully by running the command below, substituting the namespace name you created previously for dev-deploys.
kubectl describe certificate -n dev-deploys
Validate the Status shows the certificate was created.
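With the certmanager.k8s.io/v1alpha1 API, the describe output reports readiness through a condition. Below is a trimmed, hypothetical excerpt of what a successfully issued certificate might look like, with a quick check against it:

```shell
# Hypothetical, trimmed excerpt of what
#   kubectl describe certificate -n dev-deploys
# might report once the certificate has been issued.
cat > cert-status.txt <<'EOF'
Status:
  Conditions:
    Message:  Certificate issued successfully
    Status:   True
    Type:     Ready
EOF

# The certificate is usable once the Ready condition reports True.
grep -q 'Type:     Ready' cert-status.txt && echo "certificate ready"
```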
Deploy your application
This sample deploys the ibm-jenkins-dev chart, but you can extend this example to apply to other charts as well. Keep the command window open on the current browser window since it will continue to be used later. Open another browser window and access the IBM Cloud Private web console again from the new window.
Install the ibm-jenkins-dev chart
- Select the Catalog link from the banner of the IBM Cloud Private web console
- Select the ibm-jenkins-dev chart
- Select the Configure button
- Specify a Helm release name
- Select the namespace created in step 2
- Select the checkbox to agree to the license agreement
- Under Parameters, select “All parameters” to expand all parameters.
- Unselect “Enable persistence for this deployment” since this sample install will not use persistence
- Select Install
Validate the deployment completes successfully
- Select the View Helm Release link
- Wait a few minutes for the pod to change to the “Running” state
- Step 2 in the Notes at the bottom of the deployment page shows how to obtain the NodePort that can be used to access the jenkins server. Run the listed command to obtain the port value.
- Open a new browser window and navigate to the URL http://<external_ip>:<port_value> where <port_value> is the port obtained in the previous step.
Validate that you can access the jenkins server’s web console.
Create an ingress for TLS termination
Instead of accessing the jenkins server over HTTP, we want to provide a secure connection over HTTPS. This step creates the ingress definition necessary to expose the jenkins server over HTTPS. The HTTPS connection uses the certificate created in step 3.
- In the command window for IBM Cloud Private edit a new file named ingress.yaml
- Paste in the contents from the Ingress sample found in the Knowledge Center that is also included here:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-k8s-ingress-tls
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/rewrite-target: "/"
spec:
  tls:
  - hosts:
    - foo1.bar1
    secretName: hello-k8s-ingress-tls-1
  rules:
  - host: foo1.bar1
    http:
      paths:
      - backend:
          serviceName: hello-world-svc
          servicePort: 80
        path: /fb
Make the following changes to the sample ingress:
- Change the name field’s value from hello-k8s-ingress-tls to your custom value
- Add a namespace entry below the name similar to the entry below:
namespace: dev-deploys
Make sure you use your namespace name. In this sample the namespace is dev-deploys
- Below the "- hosts:" line, change the hostname foo1.bar1 to the common name that you used in your certificate
- Change the secretName to the name of the secret defined in the certificate’s secretName field from recipe step 3.
- Below the "rules:" line, change the host entry from foo1.bar1 to the same value used under "- hosts:".
- Change the serviceName to match the service name created by the deployment. The service name is based off of the helm release name you chose when you deployed the jenkins server. Obtain the service name from the Service table listed on the deployment page for the helm chart.
- Change the servicePort value from 80 to 8080
- Change the path from /fb to /
- Save the changes
- Create the ingress with the command:
kubectl create -f ingress.yaml
Validate the ingress
Make sure the ingress is working properly and is exposing the jenkins server using the certificate created for the ingress.
- Open a new web browser window to https://<hostname>
- The jenkins login page appears
- Select the certificate from the browser and view its details to make sure it is the certificate that was defined in step 3
Optional: Delete the NodePort to prevent HTTP access to the server
At this point the jenkins server is exposed over HTTP and HTTPS at our external hostname. The ingress was used to expose the jenkins application over HTTPS. The service created by the deployment exposes jenkins over HTTP. Follow these steps to remove the HTTP access defined in the service.
- Using the command window, run this command to edit the service:
kubectl edit service -n dev-deploys dev-jenkins-ibm-jenkins
Change the dev-deploys namespace and the dev-jenkins-ibm-jenkins values shown above to your namespace and helm chart service name respectively.
- Change the value of the type field from NodePort to ClusterIP
- Delete the lines containing a nodePort field.
Example: nodePort: 31714
- Save and quit the edit session
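For reference, after the edit the relevant part of the service spec should look roughly like the excerpt below. The port numbers and port name are illustrative rather than taken from an actual deployment:

```yaml
# excerpt of the edited service definition; port values are illustrative
spec:
  type: ClusterIP        # changed from NodePort
  ports:
  - name: http
    port: 8080
    targetPort: 8080     # the nodePort line that was here has been deleted
```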
Follow the steps below to make sure the service type was changed to ClusterIP successfully.
- From the IBM Cloud Private web console, select the Workloads menu entry and select Helm Release.
- Find the release you deployed for this sample and select the link for the release name.
- Click the link under Service and make sure the Type is ClusterIP. | https://developer.ibm.com/recipes/tutorials/tls-termination-for-your-kubernetes-application-with-certmanager-on-ibm-cloud-private/ | CC-MAIN-2020-50 | refinedweb | 1,209 | 53 |
Many new features in the last few versions of C# have made it possible to write shorter and simpler code. C# 9 is no exception. Records, new types of patterns, and some other features can make your code much more expressive. In certain cases, you can avoid writing a lot of boilerplate code that was previously needed.
The best way to create immutable types before C# 9 was to create a class with read-only properties that were initialized in the constructor:
public class Person
{
public Person(string firstName, string lastName)
{
FirstName = firstName;
LastName = lastName;
}
public string FirstName { get; }
public string LastName { get; }
}
Such a class would not allow changing its properties after it was created:
var person = new Person("John", "Doe");
person.FirstName = "Jane"; // does not compile
In a class with many properties, it can be inconvenient to initialize them all with a constructor because that constructor will have just as many parameters – one for each property. Unless you explicitly name the parameters when calling the constructor, it can become difficult to know which parameter initializes which property. For properties with reasonable default values, the object initializer syntax is commonly used instead of optional parameters to reduce the total number of parameters in the constructor:
var person = new Person("John", "Doe")
{
MiddleName = "Patrick"
};
Unfortunately, this does not work with a read-only property. To assign a value to a property in the object initializer, that property must have a setter, or the code above will not compile:
public string? MiddleName { get; set; }
However, such a property can be modified after the object has been created. Hence, the only way to create an immutable type before C# 9 was to make all its properties read-only (i.e., without a setter) and initialize them in the constructor.
C# 9 changes that by introducing init-only properties. Their setter is declared with the init keyword instead of the set keyword:
public string? MiddleName { get; init; }
Init-only properties can be initialized in the object initializer but cannot be modified after that:
var person = new Person("John", "Doe")
{
MiddleName = "Patrick"
};
person.MiddleName = "William"; // does not compile
This allows you to create immutable types with properties that do not necessarily need to be initialized in the constructor because they have valid default values.
C# 9 adds a new keyword for declaring types: record. These types are still classes (i.e., reference types) but they have some additional features and differences in functionality.
Optional simplified syntax for declaring a constructor is one such feature:
public record Person(string FirstName, string LastName);
Records still support regular property and constructor syntax just like classes. The following more verbose code can be used to create an equivalent record to the one above:
public record Person
{
public Person(string firstName, string lastName)
{
FirstName = firstName;
LastName = lastName;
}
public string FirstName { get; init; }
public string LastName { get; init; }
public void Deconstruct(out string firstName, out string lastName)
{
firstName = FirstName;
lastName = LastName;
}
}
This means that when the shorter syntax is used, the compiler automatically creates a constructor with parameters listed in the parenthesis of the record declaration, as well as matching init-only properties. That is the reason why with this shorter syntax, instead of camel case, Pascal case is typically used for parameter names.
Additionally, the compiler also creates a Deconstruct method which can be used to deconstruct the record into a matching tuple:
var person = new Person("John", "Doe");
(var firstName, var lastName) = person;
Both the constructor and the Deconstruct method have a well-defined parameter order. That is why records using the shorter syntax, which automatically generates the two, are also called positional records.
Positional records can of course still have additional properties that are not initialized with the constructor:
public record Person(string FirstName, string LastName)
{
public string? MiddleName { get; init; }
}
These additional properties will not be included in the generated Deconstruct method.
To create a similar immutable type before C# 9, you would have to write a lot more code yourself:
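Based on the class shown at the start of the article plus a hand-written Deconstruct method, the pre-C# 9 equivalent would look roughly like this (the value-equality members that records also generate are covered later in the article):

```csharp
// a sketch of the pre-C# 9 equivalent of the positional record above
public class Person
{
    public Person(string firstName, string lastName)
    {
        FirstName = firstName;
        LastName = lastName;
    }

    public string FirstName { get; }
    public string LastName { get; }

    public void Deconstruct(out string firstName, out string lastName)
    {
        firstName = FirstName;
        lastName = LastName;
    }
}
```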
Since immutable types do not allow any modifications, creating new instances of them becomes a much more common operation, especially new instances initialized with data from an existing instance and with only some of the properties modified:
var person = new Person("John", "Doe")
{
MiddleName = "Patrick"
};
var modifiedPerson = new Person(person.FirstName, person.LastName)
{
MiddleName = "William"
};
The syntax above is quite verbose and error prone, but it can be somewhat simplified if we add the following helper method to our immutable class:
public Person With(
string? firstName = null,
string? lastName = null,
string? middleName = null)
{
return new Person(
firstName != null ? firstName : this.FirstName,
lastName != null ? lastName : this.LastName)
{
MiddleName = middleName != null ? middleName : this.MiddleName
};
}
When invoking it, you could only specify the properties that you want to change:
var modifiedPerson = person.With(middleName: "Patrick");
With records in C# 9, you can now achieve that without writing a helper method for each immutable class and with an even nicer syntax by taking advantage of the new with expression:
var modifiedPerson = person with
{
MiddleName = "Patrick"
};
The with expression is automatically available for any record and can be used to create a copy of it with any number of its init-only properties modified. In C# 9, with expressions only support record types.
There is one final difference between classes and records in C# 9.
Classes implement reference equality which distinguishes between two different instances even if all the properties have the same values:
var person1 = new Person("John", "Doe");
var person2 = new Person("John", "Doe");
var areEqual = person1.Equals(person2); // = false
In contrast to that, records implement value equality which means that two instances are treated as equal if all of their properties are equal (that’s how structs behave):
var person1 = new Person("John", "Doe");
var person2 = new Person("John", "Doe");
var areEqual = person1.Equals(person2); // = true
To make a class behave like that, you would have to override its Equals method:
public override bool Equals(object? obj)
{
if (!(obj is Person other))
{
return false;
}
return this.FirstName == other.FirstName
&& this.LastName == other.LastName
&& this.MiddleName == other.MiddleName;
}
When you override the Equals method, you must also override the GetHashCode method or the collection classes that depend on them will start to behave incorrectly:
public override int GetHashCode()
{
return this.FirstName.GetHashCode()
^ this.LastName.GetHashCode()
^ (this.MiddleName?.GetHashCode() ?? 0);
}
Again, that is a lot of code that you do not have to write and maintain for records if you need value equality for your reference types.
There is a caveat, though. Value equality only works well for reference types if they are immutable. The problem is that changing a value of a property will change the result of the GetHashCode method for that instance. This will cause problems if that instance is in a collection similar to what would happen if the GetHashCode method isn’t overridden to match the Equals method implementation:
var person = new Person("John", "Doe");
var set = new HashSet<Person>();
set.Add(person);
var setContainsBefore = set.Contains(person); // = true
person.FirstName = "Patrick";
var setContainsAfter = set.Contains(person); // = false
Although the modified instance is obviously still in the set, the set’s Contains method cannot find it because the GetHashCode method of the instance returns a different value than it did when the instance was put into the set before the change.
To avoid problems like this, you should use positional records and only add read-only or init-only properties to them so that they are immutable.
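For contrast, here is a sketch of the same scenario using the immutable positional Person record from earlier. Because the instance in the set can never change, its hash code stays stable:

```csharp
var person = new Person("John", "Doe");
var set = new HashSet<Person>();
set.Add(person);

// a with expression creates a new instance instead of mutating the old one
var renamed = person with { FirstName = "Patrick" };

var containsOriginal = set.Contains(person);   // = true
var containsRenamed = set.Contains(renamed);   // = false, a different value
```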
Pattern matching was first added to C# in version 7. Since then, the feature has been improved with every version to make the code more expressive. In C# 9, two new patterns have been introduced.
The relational pattern allows you to use relational operators as part of the pattern. In C# 8 you could only do that by adding a when clause to a case statement:
var unit = duration.TotalMinutes switch
{
double d when d < 1 => DurationUnit.Seconds,
double d when d < 60 => DurationUnit.Minutes,
double d when d < 24 * 60 => DurationUnit.Hours,
double d when d >= 24 * 60 => DurationUnit.Days,
_ => DurationUnit.Unknown
};
In C# 9, this code can be further simplified by omitting the when clause and putting the comparison in the pattern itself:
var unit = duration.TotalMinutes switch
{
< 1 => DurationUnit.Seconds,
< 60 => DurationUnit.Minutes,
< 24 * 60 => DurationUnit.Hours,
>= 24 * 60 => DurationUnit.Days,
_ => DurationUnit.Unknown
};
Logical patterns add support for using logical operators in the pattern. This allows you to combine multiple conditions in a single pattern. The feature is particularly useful in a switch expression. In switch statements, case statement fall-through is an alternative to the or operator:
var weaponType = WeaponType.Unknown;
switch (weapon)
{
case Bow _:
case Crossbow _:
weaponType = WeaponType.Ranged;
break;
case Sword _:
weaponType = WeaponType.Melee;
break;
}
Since switch expressions do not have an equivalent for the statement fall-through syntax, it was necessary to repeat the expression body in such a scenario:
var weaponType = weapon switch
{
Bow _ => WeaponType.Ranged,
Crossbow _ => WeaponType.Ranged,
Sword _ => WeaponType.Melee,
_ => WeaponType.Unknown
};
With the introduction of logical patterns in C# 9, the first two cases in the code block above can be combined into one:
var weaponType = weapon switch
{
Bow or Crossbow => WeaponType.Ranged,
Sword => WeaponType.Melee,
_ => WeaponType.Unknown
};
You can also notice that there is no discard (_) in the patterns anymore. That is another improvement to pattern matching in C# 9. In type patterns, the discard can be omitted when the case body does not reference the typed value.
Two types of target-typed expressions were added to C# 9.
Target-typed new expressions are applicable to more use cases. They allow you to omit the type specification from the constructor call when the type being constructed can be implied from the context:
Person person = new("John", "Doe");
In the case above, this new feature does not bring much benefit since there was no need to repeat the type definition even before if you used the var keyword to implicitly type the variable instead:
var person = new Person("John", "Doe");
The two variants are of very similar length and it is only a matter of taste which one you prefer. We are certainly more used to the second one because it was already available before C# 9.
There are other contexts in which the target-typed new expression makes much more sense. In my opinion, the new syntax brings the most benefits in collection initializers:
var persons = new List<Person>() { new("John", "Doe"), new("Jane", "Doe") };
Before C# 9, you had to repeat the type definition for every item in the list. Now, it only needs to be specified once to declare the collection type. It can be omitted for all items if their type matches the collection’s element type.
The second type of target-typed expressions is the target-typed conditional expression. Its main benefit is that certain conditional expressions which required a cast before C# 9, now simply work without it:
// compiles in C# 9 only, doesn’t work in earlier versions
int? length = string.IsNullOrEmpty(input) ? null : input.Length;
Before C# 9, it was necessary to cast the null value to make such an expression valid:
// already works before C# 9
int? length = string.IsNullOrEmpty(input) ? (int?)null : input.Length;
Of course, even in C# 9, the cast is still necessary if you implicitly type the local variable using the var keyword because the type cannot be determined from the target variable:
// already works before C# 9
var length = string.IsNullOrEmpty(input) ? (int?)null : input.Length;
It is a minor change, but it can still make the code slightly more readable in certain cases.
The final C# 9 feature that I am going to cover in this article is support for top-level programs.
Before C# 9, any C# program required a static Main method as its entry point:
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Hello World!");
}
}
With C# 9, that is not required anymore. Any C# program can have a single file with code placed outside any class method:
System.Console.WriteLine("Hello World!");
That file will serve as the program entry point. If there is more than one such file in the program, the build will fail because the compiler cannot determine which file should act as the entry point.
This feature is particularly beneficial to beginners who do not need to learn about the Main method to write their first C# program.
However, the top-level program syntax can also be used in more complex applications. The code in the file can be asynchronous, i.e., it can use the await keyword. Also, the args variable containing the command line arguments is implicitly available in the code. Hence the following lines of code are a working program that writes the contents of a file to standard output:
using static System.IO.File;
using static System.Console;
var text = await ReadAllTextAsync(args[0]);
WriteLine(text);
I added the using static directives at the top of the file so that I do not have to specify the class name when calling the static methods.
Although it has not been long since the release of C# 9, the language design team is already thinking about features for future versions of C#: 10 and beyond. Below is a selection of what has already been mentioned. None of these features have yet been confirmed for C# 10 or any other future version of the language. It is just a look into what the team is thinking about. To learn more, you can check the C# Language Design GitHub repository. There is also a video available in which, towards the end, Mads Torgersen talks about these features and more.
Several other features related to newly introduced records might be added in the future:
As a shorthand for init-only properties a new data keyword might be introduced:
public data string? MiddleName;
This would be equivalent to the currently supported longer syntax:
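That longer syntax is the init-only property introduced earlier in this article:

```csharp
public string? MiddleName { get; init; }
```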
The short constructor syntax without the body that is currently only supported for positional records might become more generally available for other types such as classes and structs, for example:
public class Person(string firstName, string lastName)
{
public string FirstName { get; } = firstName;
public string LastName { get; } = lastName;
}
For even more flexibility, this syntax might not automatically generate and initialize init-only properties for you. Instead, the constructor parameters could be used for initializing properties you declare yourself.
The with expression is currently only supported for records. You cannot add any code to classes yourself that would allow you to use the same syntax. The introduction of special factory methods would change that:
public class Person
{
// ...
[Factory]
public Person Copy()
{
return new Person(FirstName, LastName)
{
MiddleName = MiddleName
};
}
}
A factory method would be an instance method that returns a new instance of the same type. In the code snippet above, it is annotated with a Factory attribute. A new language keyword might be used instead.
You can already write such a method. But with the compiler recognizing it as such, it would allow you to use the initializer syntax to modify the created instance, including any init-only properties:
var person = new Person("John", "Doe");
var modifiedPerson = person.Copy()
{
FirstName = "Patrick"
};
While the syntax is different from with expressions, it provides you the same feature set.
With the introduction of init-only properties which can only be initialized during object construction, there is still one feature missing. If you fully initialize the object in the constructor, you can add validation code at the end of it, to make sure that the values of all properties are valid once the initialization is complete.
The final initializers would allow you to do that even when using object initializers. The code inside them would run after the object initializer has already been run in full:
public record Person(string FirstName, string LastName)
{
public string? MiddleName { get; init; }
init
{
if (MiddleName != null && MiddleName.Length < 2)
{
throw new ArgumentException("Middle name not long enough.");
}
}
}
Currently, records are always classes, i.e., reference types. This might also change in the future with the introduction of record structs which would be value types like regular structs.
Interfaces might be extended with support for static members, including operators. One of the use cases that this would enable are generic numeric algorithms.
Implementation details would be based on algebraic structures. To give you an example of how that would work, here is an interface describing one of the simplest algebraic structures, a monoid:
interface IMonoid<T>
{
static T Zero { get; }
    static T operator +(T x, T y);
}
The structure consists of an associative operation (meaning that the order in which the operation is performed when there are more than two operands does not affect the result) and an identity element for this operation (meaning that it does not change the other operand when used with that operation). As an example, addition is an associative operation for integer numbers, and 0 is an identity element for addition because adding 0 to any other element does not change its value.
This means that the built-in int datatype could implement this interface:
struct Int32 : IMonoid<Int32> // ...
{
// ...
}
All of this would allow implementation of generic methods for numeric operations:
public static T AddAll<T>(T[] operands) where T: IMonoid<T>
{
T result = T.Zero;
foreach (T operand in operands)
{
result += operand;
}
return result;
}
But there is no need to worry if algebraic structures are not your strong point. The base class library would be extended with all the common structures and operations, implemented by built-in data types and ready for use.
With built-in numeric datatypes implementing the IMonoid<T> interface, you could write a method like AddAll to be used with all of them, including decimal and even Complex. Today, you need to write overloads of such methods for every data type you want to support.
As a bonus, you could create a custom data type that implements the same interface, e.g. the IMonoid<T> algebraic structure, and any methods written for IMonoid<T> would automatically work with your data type as well.
Pattern matching and the switch expression in combination with the ever-increasing set of supported pattern types allow writing code that is in many ways similar to what is possible in functional languages such as F#. However, there is still an important feature missing which in many cases prevents you from writing an exhaustive set of cases without including a “catch-all” option.
Imagine having multiple types implementing an IShape interface:
public class Circle : IShape
{
public double Radius { get; set; }
}
public class Rectangle : IShape
{
public double Width { get; set; }
public double Height { get; set; }
}
public class Triangle : IShape
{
public double A { get; set; }
public double B { get; set; }
public double C { get; set; }
}
You can now write a single switch expression to calculate the perimeter of these shapes:
var perimeter = shape switch
{
Circle circle => 2 * Math.PI * circle.Radius,
Rectangle rectangle => 2 * (rectangle.Width + rectangle.Height),
Triangle triangle => triangle.A + triangle.B + triangle.C,
_ => throw new NotImplementedException(),
};
Although you know that there are only three different types of shapes in your code, the compiler does not have this information and gives you a warning that your switch expression is not exhaustive unless you add the final case that catches any types you have not explicitly handled before.
F# solves this problem with discriminated unions which allow you to define a finite set of shapes unlike the inheritance approach in C#:
type Shape =
| Circle of radius : double
| Rectangle of width : double * height : double
| Triangle of a : double * b : double * c : double
Based on this information, the compiler can reliably determine whether a match expression (an F# equivalent to the C# switch expression) is exhaustive.
In a future version of C#, there might be an equivalent to a discriminated union from F#. The language design team would like to add this feature in a way that is idiomatic to C# and closer to the already existing concept of inheritance.
The recently released C# 9 brought several new features which can make your code shorter and simpler in certain scenarios. The most prominent new feature is the new record type, along with its supporting features: init-only properties and with expressions. Other features that can contribute to simpler code are new pattern types, target-typed expressions, and top-level programs.
As always, the language is continuously evolving, and the language design team is already thinking about future features. In this article, I covered potential improvements to records, support for static members in interfaces, and discriminated unions.! | https://www.dotnetcurry.com/csharp/simpler-code-with-csharp-9 | CC-MAIN-2022-05 | refinedweb | 3,437 | 51.07 |
On Tue, May 19, 2009 at 3:40 PM, <james at goldwater.org.uk> wrote: > When I use py++ (1.0.0) with indexing_suite_version = 2 to wrap std::set<string> etc, it generates code which doesn't compile out of the box. The code from the boost sandbox puts various definitions in the boost::python::indexing_v2 namespace, and the header files in boost/python/suite/indexing_v2. > > However, py++ generates code that assumes that the sandbox code is in the boost trunk namespace (bp::indexing) and filestructure (boost/python/suite/indexing). It's easy to either fix-up the generated code, or edit the sandbox code and merge it into trunk, but I'd rather keep to published code rather than commit the project to our own version of boost or add another step to the code generation. > > Is there a way of telling py++ to use the different namespace and directory? Not exactly. I suggest you to use current SVN version. It is pretty stable. I hope to release it pretty soon. The main reason is describe here: The short version: Indexing Suite V2 was reworked and now you don't need to patch Boost libraries and Py++ does all the work. > Or am I misunderstanding some things? No, the Py++ you use, assumes that you installed indexing suite V2 to the location you specified. -- Roman Yakovenko C++ Python language binding | https://mail.python.org/pipermail/cplusplus-sig/2009-May/014521.html | CC-MAIN-2016-44 | refinedweb | 231 | 64.71 |
configuration makes it easier to add integrations for new data sources.
Ingest Manager
Ingest Manager provides a web-based UI for configuring integrations with your data sources. This includes popular services and platforms like Nginx or AWS, as well as many generic input types like log files.
The Elastic Agent configuration allows you to use any number of integrations for data sources. You can apply the Elastic Agent configuration to multiple agents, making it even easier to manage configuration at scale.
When you add an integration, you select the agent configuration to use then configure inputs for logs and metrics, such as the path to your Nginx access logs. When you’re done, you save the integration to update the Elastic Agent configuration. The next time enrolled agents check in, they receive the update. Having the configurations automatically deployed is more convenient than doing it yourself by using SSH, Ansible playbooks, or some other tool.
If you prefer infrastructure as code, you may use YAML files and APIs. Ingest Manager has an API-first design. Anything you can do in the UI, you can also do using the API. This makes it easy to automate and integrate with other systems.
Central management in Fleet
You can see the state of all your Elastic Agents on the Fleet page. Here you can see which agents are online, which have errors, and the last time they checked in. You can also see the version of the Elastic Agent.
Data streams make index management easier
The data collected by Elastic Agent is stored in indices that are more granular than you’d get by default with Filebeat. This gives you more visibility into the sources of data volume, and control over lifecycle management policies and index permissions. These indices are called data streams.
As you can see in the following screen, each data stream (or index) is broken out by dataset, type, and namespace.
The dataset is defined by the integration and describes the fields and other settings for each index. For example, you might have one dataset for process metrics with a field describing whether the process is running or not, and another dataset for disk I/O metrics with a field describing the number of bytes read.
This indexing strategy solves the issue of having indices with hundreds or thousands of fields. Because each index stores only a small number of fields, the indices are more compact with faster autocomplete. And as an added bonus, the Discover page only shows relevant fields.
Namespaces are user-defined strings that allow you to group data any way you like. For example, you might group your data by environment (prod, QA) or by team name. Using a namespace makes it easier to search the data from a given source by using index patterns, or to give users permissions to data by assigning an index pattern to user roles. Many of our customers already organize their indices this way, and now we are providing this best practice as a default.
When searching your data in Kibana, you can use an index pattern to search across all or some of the indices. | https://www.elastic.co/guide/en/fleet/7.9/ingest-management-overview.html | CC-MAIN-2021-43 | refinedweb | 529 | 62.27 |
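As an illustration, the indices behind each data stream follow a type-dataset-namespace naming scheme, which is what makes a slice of the data easy to target with an index pattern. The concrete names below are made up for this example:

```text
metrics-system.cpu-prod     # type: metrics, dataset: system.cpu,   namespace: prod
logs-nginx.access-prod      # type: logs,    dataset: nginx.access, namespace: prod

logs-*-prod                 # index pattern matching all logs in the prod namespace
```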
Python 2 has an end-of-life date set for 2020 and now that most third-party packages support both 2 and 3 we are starting to think about a migration strategy for Mantid.
This is currently only possible on a Linux system with a pre-installed version of python 3. You need to install some additional packages as shown below:
apt-get install python3-sip-dev python3-pyqt4 python3-numpy python3-scipy python3-sphinx \ python3-sphinx-bootstrap-theme python3-dateutil python3-matplotlib ipython3-qtconsole \ python3-h5py python3-yaml
or on Fedora, with slightly different package names
dnf install python3-sip-devel python3-PyQt4-devel python3-numpy python3-scipy python3-sphinx \ python3-sphinx-theme-bootstrap python3-dateutil python3-matplotlib python3-ipython-gui \ boost-python3-devel python3-h5py python3-yaml
then set -DPYTHON_EXECUTABLE=/usr/bin/python3 when running cmake before building.
Warning
If any of these packages are installed via pip, this could cause conflicts. Install as described here only.
Python 3 introduces many exciting new features. For a full description see the official Python 3 changes document. For a shorter overview see here or here.
Some features of Python 3 have been backported to Python 2.x within the __future__ module. These make it easier to write code that is compatible with both versions.
This cheat sheet provides helpful examples of how to write code in a 2/3 compatible manner. Where an
option is given to use either the six or
future (not to be confused with
__future__!) modules
then
six is used.
All new code should be written to be compatible with Python 2 & 3 and as a minimum the first import line of the module should be:
from __future__ import (absolute_import, division, print_function)
It is quite common to also see
unicode_literals in the above import list, however, when running
under Python 2
Boost.Python will not automatically convert a Python
str to C++
std::string
automatically if the string is unicode. When running with Python 3
Boost.Python will do this
conversion automatically for unicode strings so this is in fact not a huge issue going forward.
One way to migrate a file from python 2 to 3 is as follows…
Warning
2to3script you will need to start the command-prompt.bat in the build directory and run
%PYTHONHOME%\Scripts\2to3
Run the following script to run the python 2 to 3 translation tool and rename the file to
filename.py.bak
2to3 --no-diffs -w filename.py mv filename.py{,.bak};
Run one of the following commands to append the import statement listed above.
awk '/(from|import)/ && !x {print "from __future__ import (absolute_import, division, print_function)\n"; x=1} 1' \ filename.py.bak > filename.py
or
sed -i '0,/^import\|from.*/s/^import\|from.*/from __future__ import (absolute_import, division, print_function)\n&/' filename.py
Check each changed block,
xrangewith
rangethen add
from six.moves import rangeto the imports list
ifilterfalsewith
filterfalsefrom
itertoolsthen replace a statement like
from itertools import filterfalsewith
from six.moves import filterfalsein the imports list. There are more cases like this documented here.
for k, v in knights.iteritems()with
for k, v in knights.items()then add
from six import iteritemsto the import list and update the replacement to
for k, v in iteritems(knights).
In some cases like
range, pylint will complain about Replacing builtin ‘range’ or similar.
Make sure to put the proper ignore statement on that line using
#pylint: disable=redefined-builtin.
Check the code still runs as expected in Python 2.
Note
2to3 will try to keep the type of the objects the same. So, for example
range(5) will
become
list(range(5)). This is not necessary if you use it just for iteration. Things like
for i in range(5) will work in both versions of Python, you don’t need to transform it into a
list. | http://developer.mantidproject.org/Python3.html | CC-MAIN-2018-47 | refinedweb | 639 | 57.37 |
I have another usecase:
For a CD pipeline we do the following:
1. build the application
2. deploy it to a container
3. run integration tests
Step 3 has a pom.xml with packaging type pom and has a couple of plugins
specified (e.g. jmeter run + analyze + report)
In this case there are a lot of projects which make use of this pipeline,
so we want to have a parent-pom, so the it projects only have to specify
project specific properties.
in the parent you need to specify the plugins with pluginManagement, since
you don't want to execute them when building the parent.
However, this means that every it-project must add the plugin to their pom
just to be picked up as part of the lifecycle.
In this case you're actually looking for something which is the opposite
of <inherit>, which means that the child project inherits it or not. The
opposite would always be inherited by the child-module, but its value
tells if this parent-project uses the configuration or not.
Or it would be nice if the pom could define the lifecycle *without* the
use of an extension, that would be overkill. The only thing that needs to
be done is to have some configuration which binds goals to phases, nothing
more.
I don't think that this fits in the 4.0.0 model and you might wonder if it
belongs there. One solution I can think of is an attachment to the
parent-project with a 'lifecycles'-classifier. It would be an xml
containing the some kind of configuration which looks like the one we're
already using in the components.xml
Robert
On Mon, 08 Feb 2016 23:33:54 +0100, Stephen Connolly
<stephen.alan.connolly@gmail.com> wrote:
> So I was thinking somewhat about the issues with custom lifecycles.
>
> One of the nice things I like about Maven is that the use of the standard
> lifecycles helps orientate new developers and prevents the sprawl of ANT
> targets.
>
> When I look at all the other build systems, what I keep missing in them
> is
> the standard lifecycle. Every time you land on a project you end up
> spending altogether far too much time trying to figure out the special
> incantations required to build the project... Is it `ant clean build`, is
> it `ant distclean dist`, etc. And this is not just an ANT issue, it
> affects
> all the build systems from make onwards.
>
> Now the thing is that Maven actually supports custom lifecycles, so I can
> create a custom lifecycle with a custom list of phases using whatever
> names
> I decide... The reason people don't do this is because it's seen as hard
> to
> do...
>
> There is that quote: "Nothing is either good or bad, but thinking makes
> it
> so"
>
> By being perceived as hard to do, custom lifecycles have resulted in a
> solid set of well defined phases...
>
> On the other hand, people end up abusing the standard lifecycle in order
> to
> do lifecycle like things... Has anyone seen people using special profiles
> coupled with plugins bound to early lifecycle phases to do non-build
> related things? I know I have been guilt of this... After all
> `initialize`
> and `validate` are generally no-op phases and if you use `<defaultGoal>`
> in
> the profile you can achieve quite a lot... Except now the old problem is
> back... How do I start up a local test environment: `mvn
> -Pcreate-test-env
> -pl :test-env`... Well that's non-obvious to discover... And it's not
> very
> portable either... In fact give me 3 months away and I'll probably have
> forgotten how to do it myself...
>
> So much nicer would be to actually start using custom lifecycles...
>
> First off, let's say we define a deployment lifecycle with goals to
> assist
> shipping the artifacts to deployment environments. So you want to type
> `mvn
> ship -Pproduction` to ship the artifacts to production. In a multi module
> project this will only work if all modules in the project have the
> extension with this custom lifecycle in scope... So good luck to you if
> you
> have to pull in some projects to your reactor where you only have read
> access to SCM and they don't use your extension... You used to end up
> needing a custom distribution of Maven with the extension pre-loaded...
>
> This is somewhat easier with the .mvn directory... As we can ensure the
> extension defining the custom lifecycle is loaded for all projects in the
> repository...
>
> But here is my problem:
> What happens when there are two extensions defining different
> lifecycles with the same phase names?
>
> So I've added my extension which defines the `ship` phase and there's
> also
> another project in the reactor with an extension which has defined a
> different lifecycle which also has the phase `ship`... Who wins?
>
> Well the current answer to who wins is: "first in the reactor wins"
>
> So if I have the .mvn directory loading up the custom extension and the
> other project is second in the reactor then my `ship` will win... But if
> the other project is first in the reactor then that project's `ship` may
> win... And then the build will fail as on the second reactor module that
> lifecycle does not exist.
>
> So it seems obvious to me that we need to provide a way to namespace
> lifecycle phases... So that
>
> `mvn default::deploy` and `mvn deploy` are logically the same as long as
> only the "default" build lifecycle defines a "deploy" phase. The same
> would
> work for `mvn clean::clean` vs `mvn clean` and `mvn site::site-deploy` vs
> `mvn site-deploy` for the "clean" and "site" lifecycles respectively
>
> The second nice thing about namespacing lifecycles is that we can
> automatically trim the project list based on those projects that define
> the
> lifecycles...
>
> So then
>
> `mvn ship::ship`
>
> will only operate on those projects that actually have an extension that
> defines the ship lifecycle...
>
> If we take this further we may also need to ack that we have no control
> over the extensions that define lifecycles with specific IDs... So you
> may
> need to further qualify the lifecycle... `mvn
> groupId:artifactId:version::lifecycleId::phase` being the fully specified
> phase
>
> What do people think? Is this something to consider? Will I file a JIRA
>
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@maven.apache.org
For additional commands, e-mail: dev-help@maven.apache.org | http://mail-archives.apache.org/mod_mbox/maven-dev/201604.mbox/%3Cop.ygqyz90akdkhrr@desktop-2khsk44.dynamic.ziggo.nl%3E | CC-MAIN-2019-26 | refinedweb | 1,081 | 62.38 |
Uncertainty in an integral equation
Posted July 10, 2013 at 09:05 AM | categories: uncertainty, math | tags: | View Comments
In a previous example, we solved for the time to reach a specific conversion in a batch reactor. However, it is likely there is uncertainty in the rate constant, and possibly in the initial concentration. Here we examine the effects of that uncertainty on the time to reach the desired conversion.
To do this we have to write a function that takes arguments with uncertainty, and wrap the function with the uncertainties.wrap decorator. The function must return a single float number (current limitation of the uncertainties package). Then, we simply call the function, and the uncertainties from the inputs will be automatically propagated to the outputs. Let us say there is about 10% uncertainty in the rate constant, and 1% uncertainty in the initial concentration.
from scipy.integrate import quad import uncertainties as u k = u.ufloat((1.0e-3, 1.0e-4)) Ca0 = u.ufloat((1.0, 0.01))# mol/L @u.wrap def func(k, Ca0): def integrand(X): return 1./(k*Ca0)*(1./(1-X)**2) integral, abserr = quad(integrand, 0, 0.9) return integral sol = func(k, Ca0) print 't = {0} seconds ({1} hours)'.format(sol, sol/3600)
t = 9000.0+/-904.488801332 seconds (2.5+/-0.251246889259 hours)
The result shows about a 10% uncertainty in the time, which is similar to the largest uncertainty in the inputs. This information should certainly be used in making decisions about how long to actually run the reactor to be sure of reaching the goal. For example, in this case, running the reactor for 3 hours (that is roughly + 2σ) would ensure at a high level of confidence (approximately 95% confidence) that you reach at least 90% conversion.
Copyright (C) 2013 by John Kitchin. See the License for information about copying. | http://kitchingroup.cheme.cmu.edu/blog/2013/07/10/Uncertainty-in-an-integral-equation/ | CC-MAIN-2017-39 | refinedweb | 313 | 57.37 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
printing "sale.order" fields in account.invoice pdf (qweb)
My Goal is to print fields of the "sale.order" into the invoice pdf.
Normaly I use
<span t-
(invoice_line for example).
and nest it like this for example:
<span t-
That mostly work for account.invoice and all kinds of connected fields.
Is there a way to "nest to" the informations of the sale.order which Im missing? Or is there maybe a way not to start from "o"?
I found the solution. As said, there isnt any connection to the sale.order so I build a module to make this connection:
from openerp.osv import fields, osv."),
}
now its possible to nest the fields like this in the invoice pdf:
<span t-
Thank you for the explanation. Can you please show
- how this code would look like when I want to access the sales.order module not from accounts but from the Manufacturing Orders module in a qweb report.
- where to put the code. Do I need to create a new file or can I place this into a specific file? Please explain where to copy this file into. | https://www.odoo.com/forum/help-1/question/printing-sale-order-fields-in-account-invoice-pdf-qweb-93678 | CC-MAIN-2017-09 | refinedweb | 219 | 78.35 |
csPoly2D Class Reference
[Geometry utilities]
The following class represents a general 2D polygon. More...
#include <csgeom/poly2d.h>
Detailed Description
The following class represents a general 2D polygon.
Definition at line 40 of file poly2d.h.
Constructor & Destructor Documentation
Make a new empty polygon.
Copy constructor.
Destructor.
Member Function Documentation
Add a vertex (2D) to the polygon.
Return index of added vertex.
Clipping routines.
They return false if the resulting polygon is not visible for some reason. Note that these routines must not be called if the polygon is not visible. These routines will not check that. Note that these routines will put the resulting clipped 2D polygon in place of the original 2D polygon.
This routine is similar to Intersect but it only returns the polygon on the 'right' (positive) side of the plane.
Extend this polygon with another polygon so that the resulting polygon is: (a) still convex, (b) fully contains this polygon, and (c) contains as much as possible of the other polgon.
'this_edge' is the index of the common edge for this polygon. Edges are indexed with 0 being the edge from 0 to 1 and n-1 being the edge from n-1 to 0.
Calculate the signed area of this polygon.
Test if this vector is inside the polygon.
Test if a vector is inside the given polygon.
Intersect this polygon with a given plane and return the two resulting polygons in left and right.
This version is robust. If one of the edges of this polygon happens to be on the same plane as 'plane' then the edge will go to the polygon which already has most edges. i.e. you will not get degenerate polygons.
Initialize the polygon to empty.
Make room for at least the specified number of vertices.
Assignment operator.
Member Data Documentation
The documentation for this class was generated from the following file:
Generated for Crystal Space 1.4.1 by doxygen 1.7.1 | http://www.crystalspace3d.org/docs/online/api-1.4.1/classcsPoly2D.html | CC-MAIN-2014-15 | refinedweb | 326 | 67.15 |
Programming Java for Absolute Beginners
Introduction: Programming Java for Absolute Beginners
Hello there! My guess is that if you've stumbled across this instructable you want to learn to program using java! (Or at least I hope so.) Anyway, say you know absolutely nothing about java (Or close to nothing, or you just want to learn more) then this instructable is perfect for you! If you enjoy this instructable, please leave some feedback! I also have a website now! Check it out here: homebrewbundle.com
In this instructable I will:
-Help you download, and install the required tools
-Teach you the very basics of syntax, and some code
-Help you create an actual program
Now, I will be entering this in the Make-To-Learn Youth contest, so I am going to answer some questions
What did you make?
I created a short, but useful guide to getting started with programming java. I started by learning it myself, then decided to teach others.
How did you make it?
I used Java netbeans to create a program, then copied the code and taught how it works.
Where did you make it?
I made this project at my house. But I learned to program at a local computer science club.
What did you learn?
Before making this project, I taught myself about java, and got accustomed to Netbeans' layout.
If you get stuck, or your program isn't working, you can comment below with your question. Ready? Let's go!
Step 1: Downloading the Tools
Step 2: Understanding the IDE
When Netbeans opens you will be greeted by the IDE. Now, I know it may seem like a lot to take in, but let me break it down.
At the top of the screen you should see many different icons, the first is "New File" You can use this to create classes and other files, but we will get into this more later. The second icon you will probably use the most, this is the "New Project" icon. The next icon is the "Open Project" icon. You can use this if you made an application on a different computer, and want to open it on a different computer. The next is "Save All", in Java Netbeans you can have multiple projects open at one time, and this allows you to save them all at once. Next is "Undo" and "Redo", they are pretty self-explanatory. The next five are "Build" which compiles your project so you can create an executable of it (A distributable file). Next is "Clean and Build" This cleans all the files in your project. After that is "Run" use this to Run your projects code. Those are the only icons you really need to know. On the right you should see a projects tab, this is where you can access the files of all your applications. Now, that you understand a little more about this IDE, let's get started making this application!
Step 3: Building Your Application
Let's start with a very simple application, the famous Hello World application will do. Now, what we are going to do is have the computer say "Hello World!". To do this create a new project, Click "Next", and call the Project "HelloWorld", leave everything else default and click "Finish". Now, You should be greeted with this code:
/*
* To change this template, choose Tools | Templates
* and open the template in the editor.
*/
package helloworld;
/**
*
* @author Your name here
*/
public class HelloWorld {
/**
* @param args the command line arguments
*/
public static void main(String[] args) {
// TODO code application logic here
}
}
Highlight "// TODO code application logic here" and type System.out.println("Hello World!");
Now, I'm going to explain what we just did. We just told the system to print out "Hello World!" ln means line, so it will print all the words on the same line, try taking it out and see what happens! If you want some more advanced techniques try typing "sout" and press the tab key.
Step 4: Wrapping Up
I really hope you enjoyed this instructable and took something away from it. Please leave some feedback below if you enjoyed, I really appreciate it! Feel free to check out my other projects if you would like to know a little bit more advanced code. If you would like to see more, just leave a comment. Have a nice day!
Thanks sharing java topic which helps beginners..
Thanks! It helped a lot because I didn't know anything :D!
Thanks for your informative article. Java is
most popular programming language used for creating rich enterprise, desktop
and web applications. Keep on updating your blog with such informative post.
my link works
it(The link) does not work. plz change it.
great thanks so much
hi, i downloaded the tools but it seems it doesn't works. please help! thanks!
Very helpful writeup! Thanks. | http://www.instructables.com/id/Programming-Java-for-Absolute-Beginners-Lesson-1/ | CC-MAIN-2017-43 | refinedweb | 815 | 82.14 |
Although, I use both vectors and dynamic arrays interchangeably. Depends on what I'm doing/using it for.
Printable View
>> Doesn't mean they're hard to use or implement.
Implementing a full-fledged vector is not as trivial as you may think. Give it a try if you doubt me. ;)
But something like...
... is simple.... is simple.Code:
//may the code gods have mercy if this is wrong :)
#include <iostream>
#include <vector>
using namespace std;
int main()
{
vector<int> MyVector;
int i;
for(i = 0; i < 10; i++)
MyVector.push_back(i);
for(i = 0; i < 10; i++)
cout<< MyVector[i] << endl;
return 0;
}
I have a follow up question.
Say, I have the 1G array of doubles. In my application I know for sure that all doubles will always be non-negative.
Now, if I needed another array of the same length containing either 1 or 0, I would have to declare another array of integers. Could I save memory by using the sign-bit (leading bit?) of the double to store the bits? Or would this end in a mess?
SK
It would seem a bit messy.
A smaller datatype for holding ones and zeros would be char. Even smaller should be std::bitset, but then the number of ones and zeros to store should be known at compile time.
If that is unknown the most compact representation would be the std::vector<bool> specialization (but that is not recommended since it works differently from other types of vector).
If you don't mind getting boost, you could also try boost::dynamic_bitset.
EDIT: maybe one has to reverse the order in BIT_TESTEDIT: maybe one has to reverse the order in BIT_TESTCode:
// BIT MANIPULATIONS
#define BOOL(x) ( !(!(x)) )
#define BIT_TEST( arg , pos ) BOOL( (arg)&(1L << (pos)) )
#define BIT_FLIP( arg , pos ) ( (arg)^(1L << (pos)) )
// suppose you want an array with 1's in the 3rd and 10th positions
unsigned int n = 0;
n = BIT_FLIP( n , 3 );
n = BIT_FLIP( n , 10 );
// read the digits using BIT_TEST, e.g.
for ( i = 10; i--; ) { printf( "%u\n" , BIT_TEST( n , i ) ); }
And with bitset, the same code without any macros would be
The difference is that you are not limited with the size of one unsigned (bitset uses an array of unsigneds for storage).The difference is that you are not limited with the size of one unsigned (bitset uses an array of unsigneds for storage).Code:
#include <bitset>
#include <iostream>
int main()
{
std::bitset<32> n;
n.flip(3);
n.flip(10);
for ( int i = 11; i--; ) {
std::cout << n.test(i) << '\n';
}
} | http://cboard.cprogramming.com/cplusplus-programming/116855-max-array-size-question-2-print.html | CC-MAIN-2015-06 | refinedweb | 430 | 72.36 |
Integrating Sencha Animator Result with Sencha Touch
Hi,
Can the results of Sencha Animator be shown in Sencha Touch application ?
If it is possible, is there any tutorial to do this ?
Thanks.
Yeah, that should be possible.
You can use an iframe to refer to the exported animation.
As for learning more about Sencha Touch, you can find more info in the learn section and the Sencha Touch forums.
Something like this might work:
Code:
var animationPanel = new Ext.Panel({ fullscreen: true, html: '<iframe src="path/to/export.html" width="300" height="300"></iframe>' });
Integrating animator without an iframe
Is there another Sencha Touch way to do this without an iframe?
Some people have mentioned using Ext.Ajax.request but I can't find any working examples of this.
Thank you,
Nigel
If you don't mind getting your hands dirty, you could copy the html and the css (and js if needed) manually. Note that this might lead to some namespace conflicts and you need to go over the css rules to make sure they don't affect other elements on your page (in particular there is a ol and li rule that is currently global).
We're planning to make this easier in the future! | https://www.sencha.com/forum/showthread.php?142785-Integrating-Sencha-Animator-Result-with-Sencha-Touch&p=657441&viewfull=1 | CC-MAIN-2015-35 | refinedweb | 207 | 64.91 |
Reverse String Ignoring Special Characters
October 30, 2015
Today’s exercise solves a homework problem for some lucky reader:
Given an input string that contains both alphabetic characters and non-alphabetic characters, reverse the alphabetic characters while leaving the non-alphabetic characters in place. For instance, given the input string
a!b3c, return the output string
c!b3a, where the non-alphabetic characters
!and
3are in their original positions.
Your task is to write a program that reverses a string while leaving non-alphabetic characters in place. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
Two solutions in scala.
The first works with the indexes -> complicated, slow
The second is a faster tail recursive solution
Here’s some C++:
And some Haskell:
Haskell:
In Python.
Standard ML solution.
Here is a Common Lisp solution:
As you can see, it’s very simple. We just use a “generic” function that is not restricted to strings, or vectors, but that will work on any sequence (list, vectors, strings), and using any predicate to select the elements to be reversed.
We’ll use Common Lisp generic functions with methods to provide the variants for the various sequence subclasses (and since string is a subclass of vector, it’ll be included in the method for vector).
Also, we provide two versions of the generic function, one destructive and one non-destructive. nreverse-elements-such will reverse the sequence in place, while reverse-elements-such will create a new sequence. Of course, we will reuse the former to implement the later.
I’m learning Julia, at v0.4.0 now. Strings are stored as UTF-8, character length in a string varies, in-place mutation makes no sense. A vector of valid indices is neatly reversed using a bit vector to mark the positions of the alphabetic indices. The resulting index gives the result.
Another Haskell solution. Similar to the Lisp version, we write a reversal function that takes a predicate indicating which elements to reverse and which to leave in their original positions. We then use it to define functions to reverse letters of a string and odd numbers in a list of numbers. (Arguably, the revPred function suffers from a round of code golf. It’s a sickness… :-)
Another, more straight-forward solution…
@fisherro: Nice ingenious solution. We can extend your proxy idea to an outer container of iterators over the inner container, with a suitable iterator class that acts as a proxy for the inner elements. Now we can eg. sort all digits as well as reversing just the alphabetic characters:
@matthew Very nice!
I’m not sure if it is ever practical, but I like the fact that you could use this to std::sort a std::list in-place by proxying through a std::vector.
It occurs to me that my second solution is not guaranteed to work. Because std::transform does not guarantees the order in which it transforms the elements. It should use std::for_each instead.
package reversestring;
public class ReverseString {
static String input = “a!b3c”;
public static void main(String[] args) {
int stringLength = input.length();
int currentAlphaindex = 0;
char currentChar = ‘a’;
String letters = “”;
String output = “”;
int replacementCount = 0;
char[] alpha = new char[stringLength];
int[] alphaIndex = new int[stringLength];
char[] fullText = new char[stringLength];
for (int i = 0; i = 0; i–){
letters = letters + alpha[i];
letters = letters.trim();
}
for(int i = 0; i = 0)){
fullText[alphaIndex[i]] = letters.charAt(replacementCount);
replacementCount++;
}
output = output + fullText[i];
}
System.out.println(“input: ” + input + “\noutput: ” + output);
}
}
Sorry guys, I stuffed the indentation on that one.
attempt number 2
@fisherro: Thanks. I liked that possible use too. Incidentally, I think my iterator operator++ and operator– should return a *this reference rather than a new iterator (and I don’t think the template template parameter is necessary for the inner container).
I wonder if we could define an inner container that iterates over the contents of a UTF-8 string.
Reblogged this on anutech.
@matthew
A UTF-8 iterator might be tricky. My first instinct is to use std::string as the value_type. But the specification wants an iterator’s operator* to return a reference. (And operator-> requires returning a pointer.) Maybe it could be faked with a proxy object?
I think using char32_t has the same issue.
A string_view might work, though. It can point directly to the original bytes.
@matthew
Here’s a rough first pass of an iterator adapter for walking the characters in a UTF-8 string. It would be better as a container adapter. Well…probably a lot about it could be better. ^_^
Yes, looks tricky wrapping up UTF-8 in an iterable class. I was musing on the idea of dealing with UTF-8 in-place, and came up with this. It uses the well-known trick for reversing a sequence of blocks in memory by reversing each block individually, then the whole sequence. Seems a waste of electrons to do the full reverse when the outer blocks are the same size so we deal with that specially:
The output is quite pretty:
Probably doesn’t deal with combining characters properly though… | https://programmingpraxis.com/2015/10/30/reverse-string-ignoring-special-characters/ | CC-MAIN-2017-51 | refinedweb | 875 | 57.16 |
Introduction
In this Django Heroku guide, I will talk about how to deploy a Django project to Heroku using Docker.
So after you read it, you will learn:
The difference between Heroku Buildpacks and Heroku Container.
How to serve Django static assets and media files in Heroku.
How to test Docker image for Django project in local env.
How to build front-end stuff for Django project when deploying.
The source code of this tutorial is django-heroku. I would appreciate that if you could give it a star.
Heroku Buildpacks and Heroku Container
There are mainly two ways to deploy a Django project to Heroku.

One way is buildpacks.

Buildpacks are responsible for transforming deployed code into a slug, which can then be executed on a dyno.
You can see Buildpacks as pre-defined scripts maintained by the Heroku team which deploy your Django project. They usually depend on your programming language.
Another way is using
Docker.
Docker provides us a more flexible way so we can take more control, you can install any packages as you like to the OS, or also run any commands during the deployment process.
For example, if your Django project use NPM as front-end solution (this is popular now), you want to
npm install some dependency packages during the deployment process or wnat to run custom npm build command, Docker seems more clean solution even
buildpacks can do this. (buildpacks can also do this but still have some limitations)
What is more, Docker lets you deploy the project in a way most platforms can support, which can save time if you want to migrate it to other platforms such as AWS or Azure in the future.
Build manifest and Container Registry
Some people are confused about Build manifest and Container Registry in the Heroku Docker docs, so let me explain here.
Container Registry means you build the Docker image locally, and then push the image to the Heroku Container Registry. Heroku would use the image to create a container to host your Django project.
Build manifest means you push a Dockerfile and Heroku would build it and run it in the standard release flow.
What is more, Build manifest supports some useful Heroku built-in features such as Review Apps and Release.
If you have no special reason, I strongly recommend using the Build Manifest way to deploy Django.
Docker Build vs Docker Run
So what is the difference between docker build and docker run?
docker build builds Docker images from a Dockerfile.

A Dockerfile is a text document that contains all the commands a user could call on the command line to build an image.

docker run creates a writeable container layer over the specified image, and then starts it.
So we should first use docker build to build the Docker image from the Dockerfile, and then create a container over the image.
Step1: Start to write Dockerfile
Now that you have a basic understanding of Heroku Docker deployment, let's learn more about the Dockerfile. Here I will use django_heroku as an example to show you how to write a Dockerfile.
# Please remember to rename django_heroku to your project directory name
FROM python:3.6-stretch

# WORKDIR sets the working directory for docker instructions, please do not use cd
WORKDIR /app

# Set the environment variables
ENV PYTHONUNBUFFERED=1 \
    PYTHONPATH=/app \
    DJANGO_SETTINGS_MODULE=config.settings.production \
    PORT=8000 \
    WEB_CONCURRENCY=3

EXPOSE 8000

# Install operating system dependencies.
RUN apt-get update -y && \
    apt-get install -y apt-transport-https rsync gettext libgettextpo-dev && \
    curl -sL | bash - && \
    apt-get install -y nodejs && \
    rm -rf /var/lib/apt/lists/*

# Start to compile front-end stuff
WORKDIR django_heroku/static_src

# Install front-end dependencies.
COPY ./django_heroku/static_src/package.json ./django_heroku/static_src/package-lock.json ./
RUN npm install

# Run custom npm command to compile static assets such as JS, SCSS
COPY ./django_heroku/static_src/ ./
RUN npm run build:prod

# Install Gunicorn.
RUN pip install "gunicorn>=19.8,<19.9"

# Start to install back-end stuff
WORKDIR /app

# Install Python requirements.
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy application code.
COPY . .

# Collect static assets
RUN python manage.py collectstatic --noinput --clear

# Run application
CMD gunicorn config.wsgi:application
Please note that
WORKDIRdocker instruction, it would sets the working directory for docker instructions (RUN, COPY, etc.) It is very like the
cdcommand in shell, but you should not use
cdin Dockerfile.
Since you already know what is
docker runand
docker build, I want to say nearly all docker instructions above would be executed in
docker build. The only exception is the last
CMD, it would be excuted in
docker run.
We use
ENVto set the default env variable.
As you can see, I use
RUN npm run build:prodto help me build front-end assets.
If you use
pipenvin your project, you can change the
pip installcommand part.
Step2: Build Docker image_heroku:latest .
Step3: Docker run
If the docker image has been built without error, here we can keep checking in local env
$ docker run -d --name django-heroku-example -e "PORT=9000" -p 9000:9000 django_heroku:latest # Now visits
In
Dockerfile, we use
ENVto set default env variable in
docker run, but we can still use
-e "PORT=9000"to overwrite the env in run command.
-p 9000:9000let us can visits the 9000 port in container through 9000 in host machine.
Let's check the project files now
$ docker exec django-heroku-example ls /app Dockerfile config db.sqlite3 django_heroku manage.py requirements.txt static_root
As you can see, a local
db.sqlite3 is created because we did not set remote DB ENV. I will talk about it in a bit.
After you finish testing, remember to stop and remove the local docker container
$ docker stop django-heroku-example $ docker rm django-heroku-example
Step4: Serving static assets on Heroku
Django only serve media files and static assets in dev mode, so I will show you how to serve them on Heroku in production mode.
To serve static assets, we need to use a 3-party package.
whitenoise.
$ pip install whitenoise
Edit
MIDDLEWARE in
settings/base.py, put it above all other middleware apart from Django’s
SecurityMiddleware:
MIDDLEWARE = [ 'django.middleware.security.SecurityMiddleware', 'whitenoise.middleware.WhiteNoiseMiddleware', # ... ]
Set in
settings/production.py
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'
That is all the config you need.
After
python manage.py collectstatic --noinput --clearin Dockerfile executed in
docker build, all static assets would be put to
static_rootdirectory.
After
docker runexcecuted on Heroku, Django can serve static assets with the help of
whitenoise
Step5: Serving media files on Heroku
Because all changes to docker container would be lost during redploy. So we need to store our media files to some other place instead of Heroku Docker container.
The popular solution is to use Amazon S3 storage, because the service is very stable and easy to use.
If you have no Amazon service account, please go to Amazon S3 and click the
Login AWS Management Console
In the top right, click your company name and then click
My Security Credentials
Click the
Access Keyssection
Create New Access Key, please copy the
AMAZON_S3_KEYand
AMAZON_S3_SECRETto notebook.
If you are new to Amazon and have no idea what is
IAM user, you can skip it and set permissions later.
Next, we start to create Amazon bucket on S3 Management Console, please copy
Bucket name to notebook.
Bucket in Amazon S3 is like top-level container, every site should have its own
bucket, and the bucket name are unique across all Amazon s3, and the url of the media files have domain like
{bucket_name}.s3.amazonaws.com.
Now let's config Django project to let it use Amazon s3 on Heroku.
$ pip install boto3 $ pip install django-storages
Add
storages to
INSTALLED_APPS in
settings/base.py
Add config below to
settings/production.py
AWS_STORAGE_BUCKET_NAME = os.environ.get('AWS_STORAGE_BUCKET_NAME') AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME AWS_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID') AWS_SECRET_ACCESS_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY') AWS_S3_FILE_OVERWRITE = False MEDIA_URL = "" % AWS_S3_CUSTOM_DOMAIN DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
To secure your Django project, please set
AWS_STORAGE_BUCKET_NAME,
AWS_ACCESS_KEY_IDand
AWS_SECRET_ACCESS_KEYin env instead of project source code (I will show you how to do it in Heroku in a bit)
AWS_S3_FILE_OVERWRITEplesae set it to False, so this can let the storage handle
duplicate filenamesproblem. (I do not understand why so many blog posts did not mention this)
Step6: Remote DB support
Heroku support many different dbs, you can choose what you like and add it to your Heroku instance. (I recommend
PostgreSQL)
In Heroku, the DB connection string is attached as ENV variable. So we can config our settings in this way.
pip install dj-database-url
Set in
settings/production.py
import dj_database_url if "DATABASE_URL" in env: DATABASES['default'] = dj_database_url.config(conn_max_age=600, ssl_require=True)
Here
dj_database_url would convert
DATABASE_URL to Django db connection dict for us.
Step7: heroku.xml
heroku.yml is a manifest you can use to define your Heroku app.
Please create a file at the root of the directory
build: docker: web: Dockerfile release: image: web command: - django-admin migrate --noinput
As you can see, in
buildstage, docker would build
webimage from the
Dockerfile.
In release stage,
migratecommand would run to help us sync our database.
Step8: Deploy the Django project to Heroku
Now, let's start deploy our Django project to Heorku.
First, we go to Heroku website to login and create a app.
After we create the app, we can get the shell command which can help us deploy the project to Heroku.
Then we start to config and deploy in terminal.
$ heroku login $ git init $ heroku git:remote -a django-heroku-docker $ heroku stack:set container -a django-heroku-docker # git add files and commit $ git push heroku master
heroku stack:set containeris important here because it would tell Heroku to use container instead of
buildpacksto deploy the project.
You can find the domain of your Heroku app in
settingstab. (Heroku has free plan so you can test and learn as you like)
Step9: Add DB add-on
Now you can add db add-on to your Heroku instance, so data of your Django project would be persistent.
Go to the
overviewtab of your Heroku project, click
Configure Add-ons
Search
Heroku Postgresand click
Provisionbutton.
Now go to
settingstab of your Heroku project, click the
Reveal Config Vars
You will see
DATABASE_URL, it is ENV variable of your Heroku project and you can add AWS S3 config to make the Django project can serve media files.
NOTE: Heroku CLI is very powerful, and the
add-on operation can also be done in terminal, considering readers of this post have not much expereince on Heroku, screenshots might be better here.
Tips
You can add code below to your Dockerfile. (put it on top)
ENV PYTHONDONTWRITEBYTECODE 1 ENV PYTHONUNBUFFERED 1
PYTHONDONTWRITEBYTECODEwould make Python not write .pyc file.
PYTHONUNBUFFEREDwould make terminal output more better and not buffered.
You Django project should also work fine even you did not set them in your Dockerfile.
Conclusion
In this Django Heorku tutorial, I talked about how to deploy Django project to Heroku using Docker.
You can find the source code django-heroku. I would appreciate that if you could give it a star.
So what should you go next?
Take a look at how to use NPM as front-end solution with Django
In the next Django Heroku tutorial, I will talk about how to deploy from Gitlab to Heroku and I will also give you some useful tips.
If you still have any question, please feel free to contact us.
Thx. | https://www.accordbox.com/blog/deploy-django-project-heroku-using-docker/ | CC-MAIN-2019-43 | refinedweb | 1,925 | 62.98 |
I enjoy reading and writing. I hope you enjoy at least the former. I have moved my blog to Brendan.Enrick about interfaces. He first shows simply how to create an interface and how to implement the interface implicitly. The difference between implicitly and explicitly writing this code is what creates the different circumstances Joydip shows in his demonstration.
I'll start where he did with an interface. I'll name my interface IBloggable and I'll have a class PostContent which implements the interface.
public interface IBloggable
{
void SendToBlog();
}
public class PostContent : IBloggable
{
public void SendToBlog()
{
// Send this content to a blog
}
}
Ok so this is the implicit implementation of the interface. Basically all I mean when I say explicitly and implicitly is whether or not you're going to define everything. If I define something explicitly I have defined everything and made it perfectly clear. I have included every detail required for precise interpretation. Notice above I have not specified to which class that method applies. So here I will now define the same thing explicitly and make sure that the SendToBlog method specifies to what it belongs.
public class PostContent
{
void IBloggable.SendToBlog()
{
// Send to blog only for the interface not for implementing classes
}
}
As Joydip pointed out so well, the first one will work in either of these circumstances, but our explicitly defined method will only work in the first example.
Example 1:
IBloggable b = new PostContent();
Example 2:
PostContent b = new PostContent();
From this we can note a couple of other interesting points here.
If we implement interface methods implicitly, we are able to use the method with objects of the interface type or of the implementing class type. If we are implementing explicitly, the method will only work on the interface. The great benefit of only allowing instances of the interface to use the method is that it encourages you to write more dynamic and maintainable code since you'll be using the interface everywhere you'll be able to switch your implementing classes quickly and easily.
This is one of many great ways to write better code. I always explicitly implement my interfaces. I've not had it bite me yet, so if anyone knows a reason to not explicitly define this code, please let me know.
One nice feature in Visual Studio which makes this nice and easy. Once you specify the interface you want your class to implement you are able to right click on it and choose Implement Interface > Implement Interface Explicitly and it will create the stubs for everything required in order to implement the interface. It will even place it all in a nice code region for you. Observe the code it generates for us in this instance.
public class PostContent : IBloggable
{
#region IBloggable Members
void IBloggable.SendToBlog()
{
throw new Exception("The method or operation is not implemented.");
}
#endregion
}
Happy coding.
Published Friday, January 04, 2008 4:26 PM
by
Brendan
If you would like to receive an email when updates are made to this post, please register here
RSS
Excellent explanation!
Thanks,
Joydip
Author: ASP.NET Data Presentation Controls Essentials (Packt Publishing)
Joydip Kanjilal
Explicit interface implementation is something I use all the time and it's very useful used in the right way. If you code for your self always using it would be fine, but if your making code that's to be shared, for example some kind of api or class library I'm sure you'd be hanged if implementing all interfaces explicitly. It makes classes a lot more difficult to understand and also it hides the implemented members from intellisense if an instance is not cast to the interface. The notion of always programming against interfaces is good and something you definitely should strive to do, but in the case of an object implementing more than one interface you'd have to do runtime casts to get to members belonging to different interfaces on the same object.
I believe that explicit implementation should be limited to members that makes no sense when the runtime type of an object is known, for example if you implement your own "LinkedList"-class which implements the IList-interface, you'd explicitly implement the IsReadOnly-property since there's no reason to ask if your instance is read only once you know it's a LinkedList and you didn't implement it as a read only list.
Patrik
Patrik,
Thank you. Yes, I like your suggestion here. I definitely agree that if using multiple interfaces it is a good idea to not implement the interfaces explicitly.
You're also correct about for writing APIs since it is not internal code, and you don't want to impede the styles of other people's coding. I believe it is a best practice reserved for internal code.
Thanks for the great comment!
Brendan | http://aspadvice.com/blogs/name/archive/2008/01/04/Explicitly-and-Implicitly-Implementing-Interfaces.aspx | crawl-002 | refinedweb | 814 | 52.9 |
Bug Description
Already posted on the Operator mailing list without answer http://
I've stumbled upon a weird condition in Neutron and couldn't find a bug
filed for it. So even if it is happening with the Kilo release, it could
still be relevant. I've also read the commit logs without finding anything relevant.
The setup has 3 network nodes and 1 compute node currently hosting a virtual
network (GRE based). DVR is enabled. I have just added IPv6 to this network
and to the external network (VLAN based). The virtual network is set to SLAAC.
Now, all four mentioned nodes have spawned a radvd process and VMs are
getting globally routable addresses. Traffic has been statically routed to
the subnet so reachability is OK in both ways.
However, the link-local router address and associated MAC address is the
same in all 4 qr namespaces. About 16% packets get lost in randomly occuring
bursts. Openvswitch forwarding tables are flapping and I think that the
packet loss occurs at the moment when all 4 switches learn the MAC address
from another machine through a GRE tunnel simultaneously. With a second VM on the
network on another compute node, the packet loss is 12%.
Another router address and the external gateway address resides in a snat
namespace, which exists in only one copy. When I tell the VM to route
through that, there is no packet loss. My best solution for this so far is
by passing a script to the VM through user-data that changes the gateway and
adds a rc script to do the same on reboot.
Is there any way to change the behavior to get rid of the MAC address
conflict? I have determined that pushing a host route to the VMs is not supported
for IPv6. Therefore, the workaround is not feasible if uninformed users will be
launching VMs.
Hi!
I still haven't upgraded to Mitaka, but I have some more insight into this. It also affects IPv4.
Story:
A customer complained about connectivity issues. Pings to his instance had about 2% packet loss. I have spied into the forwarding table of OpenVSwitch on a compute node with DVR:
watch --differences=
..where the MAC belongs to the .1 address of the distributed router. The one that exists on every compute and network node.
From time to time, the port number jumped there and back again. This coincided with the lost pings.
I thought that enabling l2population could solve the issue, but alas, that only populates the br-tun bridge, not br-int. (!)
Then I ran tcpdump in the router namespace and on the instance's iptables bridge
ip netns exec qrouter-
tcpdump -i qbre6b1046f-7c -ln ether host fa:16:3e:a5:d8:e7
a) The ping requests showed on both, the reply was missing only in the router namespace.
b) Around the time of the lost ping, I saw a connection attempt to another IP address (TCP Syn), even on the instance's bridge. It was flooded from another compute node, flipping the switching table of OpenVSwitch and causing packets from all nodes in the cloud to go to the node that did the broadcast for a short time.
Steps to reproduce:
1. On a compute node cmp01, run an instance and start pinging its floating IP from the outside (not a requirement, but the traffic needs to pass through DVR).
2. On a compute node cmp02, run an instance and then stop it (shutoff state).
Ping the floating IP of the shutoff instance.
3. Observe the flooded packets, flipping switching table and packet loss.
My original bug report and this one are closely related. Both are caused by duplicate MAC addresses of the router in the DVR model. l2population does not save day as the conflict happens on br-int, not br-tun.
Is this a design flaw of DVR?
..the cause that the packet loss was 12% with IPv6 vs. only 2% in IPv4 is that with IPv6, there is radvd running and broadcasting on its own. With IPv4, it required this concrete condition to trigger.
Reading the article https:/
The dvr_host_macs MySQL table is empty. That's why I have MAC address collisions.
Why could the table be empty? Who should populate it?
I've probably found the problem. On the compute nodes, I did not have
enable_
In the [agent] section of the ml2_conf.ini. As a result, the mechanism that prevents MAC address conflicts was disabled. It is interesting that it worked that good without it.
So you are seeing this on Kilo? If so, can you reproduce it with Mitaka? There have been a number of bug fixes for IPv6 and DVR and this problem could be fixed already. | https://bugs.launchpad.net/neutron/+bug/1596473 | CC-MAIN-2020-50 | refinedweb | 794 | 74.39 |
Uses File::Copy::Recursive, but wedges another 'copy' sub so that a progress bar, or some other hook, can be displayed or run.
update:
The real trick to this particular snippet is determining that File::Copy::Recursive uses File::Copy::copy, but the copy sub is imported into the File::Copy::Recursive namespace rather than its own namespace. If you try to hook File::Copy::copy, it will not work.
For completeness, thank you jdporter, here is what it would look like if Hook::LexWrap was used:
use Hook::LexWrap;
use File::Copy::Recursive qw(dircopy);
use strict;
use vars qw($dir_from $dir_to);
$dir_from = "/tmp/from";
$dir_to = "/tmp/to";
$|=1;
# Using Hook::LexWrap
my @dirs;
wrap *File::Copy::Recursive::copy,
pre => sub { @dirs = @_ },
post => sub { printf "copying %s to %s. \r", @dirs };
dircopy($dir_from, $dir_to);
print "\n";
[download].
--hiseldlWhat time is it? It's Camel Time!
Tron
Wargames
Hackers (boo!)
The Net
Antitrust (gahhh!)
Electric dreams (yikes!)
Office Space
Jurassic Park
2001: A Space Odyssey
None of the above, please specify
Results (106 votes),
past polls | http://www.perlmonks.org/index.pl?node_id=702192 | CC-MAIN-2014-35 | refinedweb | 177 | 65.83 |
Library sample chapters
Using Managed DirectX to Write a Game
Choosing the Game
This..
Ive just started game development , this code is really helpful to me and i think to others also which wants to start develoment in this adventures field. Thanks for sharing this article. Ive learnt so many things from this article although i was enable to give it the final touch due to .x and textures file (im requesting Author to plz put a link for these file, so that me and other people can take help)
thanks a lot for this very useful article
Ankit Nagpaljavascript:smilie(':cool:')
C# Programmer
New Delhi
Your tutorial sounds great. Please include what kind of namespaces must be used in this project.
This thread is for discussions of Using Managed DirectX to Write a Game. | http://www.developerfusion.com/samplechapter/4387/using-managed-directx-to-write-a-game/ | crawl-002 | refinedweb | 135 | 69.11 |
Tests can hit complete coverage but fail to communicate. Kevlin Henney reminds us that assertions should be necessary, sufficient, and comprehensible.
It is important to test for the desired, essential behaviour of a unit of code, rather than for the incidental behaviour of its particular implementation. But this should not be taken or mistaken as an excuse for vague tests. Tests need to be both accurate and precise.
Something of a tried, tested, and testing classic, sorting routines offer an illustrative example – they are to computer science as fruit flies are to genetics. Implementing sorting algorithms is far from an everyday task for a programmer, commodified as they are in most language libraries, but sorting is such a familiar idea that most people believe they know what to expect from it. This casual familiarity, however, can make it harder to see past certain assumptions.
A test of sorts
When programmers are asked, ‘What would you test for?’, by far and away the most common response is something like, ‘The result of sorting a sequence of elements is a sorted sequence of elements.’ As definitions go, it’s perhaps a little circular, but it’s not false.
So, given the following C function:
void sort(int values[], size_t length);
and some values to be sorted:
int values[length];
the expected result of sorting:
sort(values, length);
would pass the following:
assert(is_sorted(values, length));
for some appropriate definition of is_sorted.
While this is true, it is not the whole truth. First of all, what do we mean when we say the result is ‘a sorted sequence of elements’? Sorted in what way? Most commonly, a sorted result goes from the lowest to the highest value, but that is an assumption worth stating explicitly. Assumptions are the hidden rocks many programs run aground on – if anything, one goal of testing and other development practices is to uncover assumptions rather than gloss over them.
So, are we saying they are sorted in ascending order? Not quite. What about duplicate values? We expect duplicate values to sort together rather than be discarded or placed elsewhere in the resulting sequence. Stated more precisely, ‘The result of sorting a sequence of elements is a sequence of elements sorted in non-descending order.’ Non-descending and ascending are not equivalent.
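The article leaves is_sorted to be defined appropriately; one plausible definition in C — a sketch of mine, not the article’s code — checks adjacent pairs for non-descending order:

```c
#include <stdbool.h>
#include <stddef.h>

/* True if values[0..length) is in non-descending order.
   Note <= rather than <: duplicate values are allowed to sit together,
   which is exactly the non-descending (not ascending) distinction. */
bool is_sorted(const int values[], size_t length)
{
    for (size_t i = 1; i < length; ++i)
        if (values[i - 1] > values[i])
            return false;
    return true;
}
```

An empty or single-element sequence is trivially sorted: the loop body never runs.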
Going to great lengths
When prompted for an even more precise condition, many programmers add that the resulting sequence should be the same length as the original. Although correct, whether or not this deserves to be tested depends largely on the programming language.
In C, for example, the length of an array cannot be changed. By definition, the array length after the call to sort will be the same as it was before the call. In contrast to the previous point about stating and asserting assumptions explicitly, this is not something you should or could write an assertion for. If you’re not sure about this, consider what you might write:
const size_t expected = length;
sort(values, length);
assert(length == expected);
The only thing being tested here is that the C compiler is a working C compiler. Neither length nor expected will – or can – change in this fragment of code, so a good compiler could simply optimise this to:
sort(values, length);
assert(true);
If the goal is to test sort, this truism is not particularly helpful. It is one thing to test precisely by making assumptions explicit; it is another to pursue false precision by restating defined properties of the platform.
The equivalent sort in Java would be
... void sort(int[] values) ...
And the corresponding tautologous test would be
final int expected = values.length;
sort(values);
assert values.length == expected;
Sneaking into such tests I sometimes also see assertions along the lines of
assert values != null;
If the criteria you are testing can’t be falsified by you, those tests have little value – hat tip to Karl Popper:
In so far as a scientific statement speaks about reality, it must be falsifiable: and in so far as it is not falsifiable, it does not speak about reality.
This is not to say you will never encounter compiler, library, VM, or other platform bugs, but unless you are the implementer of the compiler, library, VM, or other platform, these are outside your remit and the reality of what you are testing.
For other languages or data structure choices, that the resulting length is unchanged is a property to be asserted rather than a property that is given. For example, if we chose to use a List rather than an array in Java, its length is one of the properties that could change and would, therefore, be something to assert had remained unchanged:
final int expected = values.size();
sort(values);
assert values.size() == expected;
Similarly, where sorting is implemented as a pure function, so that it returns a sorted sequence as its result, leaving the original passed sequence untouched, stating that the result has the same length as the input makes the test more complete. This is the case with Python’s own built-in sorted function and in functional languages. If we follow the same convention for our own sort, in Python, it would look like
result = sort(values)
assert len(result) == len(values)
And, unless we are already offered guarantees on the immutability of the argument, it makes sense to assert the original values are unchanged by taking a copy for posterity and later comparison:
original = values[:]
result = sort(values)
assert values == original
assert len(result) == len(values)
The whole truth
We’ve navigated the clarity of what we mean by sorted and questions of convention and immutability… but it’s not enough.
Given the following test code:
original = values[:]
result = sort(values)
assert values == original
assert len(result) == len(values)
assert is_sorted(result)
The following implementation satisfies the postcondition of not changing its parameter and of returning a result sorted in non-descending order with the same length as the original sequence:
def sort(values):
    return list(range(len(values)))
As does the following:
def sort(values):
    return [0] * len(values)
And the following:
def sort(values):
    return [] if len(values) == 0 else [values[0]] * len(values)
Given the following sequence:
values = [3, 1, 4, 1, 5, 9]
The first example simply returns an appropriately sized list of numbers counting up from zero:
[0, 1, 2, 3, 4, 5]
The second example makes even less of an effort:
[0, 0, 0, 0, 0, 0]
The third example at least shows willing to use something more than just the length of the given argument:
[3, 3, 3, 3, 3, 3]
This last example was inspired by an error taken from production C code (fortunately caught before it was released). Rather than the contrived implementation shown here, a simple slip of a keystroke or a momentary lapse of reason led to an elaborate mechanism for populating the whole result with the first element of the given array – an i that should have been a j converted an optimal sorting algorithm into a clunky fill routine.
All these implementations satisfy the spec that the result is sorted and the same length as the original, but what they let pass is also most certainly not what was intended! Although these conditions are necessary, they are not sufficient. The result is an underfitting test that only weakly models the requirement and is too permissive in letting flawed implementations through.
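This can be checked mechanically. The harness below is mine, not the article’s: it runs the weak postcondition from earlier against each of the three impostors, and every one of them passes:

```python
def is_sorted(values):
    # Non-descending order: <=, so duplicates may sit together.
    return all(values[i - 1] <= values[i] for i in range(1, len(values)))

# The three impostors from above.
impostors = [
    lambda values: list(range(len(values))),
    lambda values: [0] * len(values),
    lambda values: [] if len(values) == 0 else [values[0]] * len(values),
]

for impostor in impostors:
    values = [3, 1, 4, 1, 5, 9]
    original = values[:]
    result = impostor(values)
    # The underfitting checks: every impostor sails through...
    assert values == original
    assert len(result) == len(values)
    assert is_sorted(result)
    # ...and yet none of them actually sorted anything.
    assert result != [1, 1, 3, 4, 5, 9]
```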
The full postcondition is that ‘The result of sorting a sequence of elements is a sequence of the original elements sorted in non-descending order.’ Once the constraint that the result must be a permutation of the original values is added, that the result length is the same as the input length comes out in the wash and doesn’t need restating regardless of language or call convention.
Oracular spectacular
Are we done? Not yet.
Even stating the postcondition in the way described is not enough to give you a good test. A good test should be comprehensible and simple enough that you can readily see that it is correct (or not).
If you already have code lying around that has the same functionality as the functionality you want to test, you can use it as a test oracle. Under the same conditions, the new code should produce the same results as the old code. There are many reasons you may find yourself in this situation: the old code represents a dependency you are trying to decouple from; the new code has better performance than the old code (faster, smaller, etc.); the new code has a more appropriate API than the old code (less error-prone, more type safe, more idiomatic, etc.); you are trying something new (programming language, tools, technique, etc.) and it makes sense to use a familiar example as your testing ground.
For example, the following Python checks our sort against the built-in sorted function:
values = ...
original = values[:]
result = sort(values)
assert values == original
assert result == sorted(values)
Sometimes, however, the scaffolding we need to introduce makes the resulting test code more opaque and less accessible. For example, to test our C version of sort against C’s standard qsort function, we can use the code in Listing 1.
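Listing 1 itself is not reproduced in this extract. A sketch of the kind of code it describes — the insertion-sort stand-in and the exact names here are my assumptions, not the original listing — might look like this:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for the code under test; an insertion sort fills in. */
static void sort(int values[], size_t length)
{
    for (size_t i = 1; i < length; ++i)
        for (size_t j = i; j > 0 && values[j - 1] > values[j]; --j)
        {
            int swap = values[j];
            values[j] = values[j - 1];
            values[j - 1] = swap;
        }
}

/* qsort needs a comparator, hence a compare_ints of our own. */
static int compare_ints(const void *lhs, const void *rhs)
{
    int left = *(const int *) lhs;
    int right = *(const int *) rhs;
    return (left > right) - (left < right);
}

/* The test case: note how much of the body is bookkeeping, not intent. */
static void test_sort_matches_qsort(void)
{
    int values[] = {3, 1, 4, 1, 5, 9};
    const size_t length = sizeof values / sizeof values[0];
    int expected[sizeof values / sizeof values[0]];

    memcpy(expected, values, sizeof values);
    qsort(expected, length, sizeof expected[0], compare_ints);

    sort(values, length);

    assert(memcmp(values, expected, sizeof values) == 0);
}
```

Copies, lengths, a comparator for qsort: most of the test is scaffolding, which is exactly the opacity being described.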
Even making the narrative structure of the test case more explicit, the code still looks to be more about bookkeeping local variables than about testing sort (see Listing 2).
And compare_ints is ours to define, so will lie outside the test case, making the test code even harder to assimilate on reading (Listing 3).
Note that this dislocation of functionality is not the same as making test code more readable by extracting clunky bookkeeping code into intentionally named functions (Listing 4).
Refactoring to reduce bulk and raise intention is certainly a practice that should be considered (much, much more often than it is) when writing test code.
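Listing 4 is also absent from this extract. In its spirit — the names and structure below are assumed rather than quoted — the bookkeeping can be pushed behind an intention-revealing helper so that the test itself reads as a sentence:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for the code under test. */
static void sort(int values[], size_t length)
{
    for (size_t i = 1; i < length; ++i)
        for (size_t j = i; j > 0 && values[j - 1] > values[j]; --j)
        {
            int swap = values[j];
            values[j] = values[j - 1];
            values[j - 1] = swap;
        }
}

static int compare_ints(const void *lhs, const void *rhs)
{
    int left = *(const int *) lhs;
    int right = *(const int *) rhs;
    return (left > right) - (left < right);
}

/* All the copying and comparing hides behind one intention-revealing name... */
static bool sorts_same_as_qsort(const int original[], size_t length)
{
    int *actual = malloc(length * sizeof *actual);
    int *expected = malloc(length * sizeof *expected);
    assert(actual != NULL && expected != NULL);

    memcpy(actual, original, length * sizeof *actual);
    memcpy(expected, original, length * sizeof *expected);

    sort(actual, length);
    qsort(expected, length, sizeof *expected, compare_ints);

    bool same = memcmp(actual, expected, length * sizeof *actual) == 0;
    free(actual);
    free(expected);
    return same;
}

/* ...so the test case itself reads as a sentence. */
static void test_sort(void)
{
    const int values[] = {3, 1, 4, 1, 5, 9};
    assert(sorts_same_as_qsort(values, 6));
}
```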
Managing expectations
One limitation of testing against an existing implementation is that it might not always be obvious what the expected result is. It is one thing to say, “The new implementation should produce the same results as the old implementation”, but quite another to make clear exactly what those results are. In the case of sorting, this is not much of an issue. In the case of something from a more negotiated domain, such as insurance quotes or delivery scheduling, it might not be clear what ‘the same results as the old implementation’ entails. The business rules, although replicated in the new implementation, may be no clearer with tests than they were without. You may have regression, but you do not necessarily have understanding.
The temptation is to make these rules explicit in the body of the test by formulating the postcondition of the called code and asserting its truth. As we’ve already seen in the case of something as seemingly trivial as sorting, arriving at a sound postcondition can be far from trivial. But now that we’ve figured it out, we could in principle use it in our test (Listing 5).
Where extracting is_sorted and is_permutation make the postcondition clear and the test more readable, but are left as an exercise for the reader to implement. And herein lies the problem: the auxiliary test code – to check that a sequence is sorted and that one sequence contains a permutation of values in another – may quite possibly be more complex than the code under test. Complexity is a breeding ground for bugs.
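To make the point concrete, here is the exercise filled in — a sketch of mine, not the article’s code, with Python’s own sorted standing in for the sort under test:

```python
from collections import Counter

def is_sorted(values):
    # Non-descending: <= lets duplicates sit together.
    return all(values[i - 1] <= values[i] for i in range(1, len(values)))

def is_permutation(left, right):
    # Multiset equality. The tempting shortcut, sorted(left) == sorted(right),
    # would be circular when the code under test is itself a sort.
    return Counter(left) == Counter(right)

def sort(values):
    return sorted(values)  # stand-in for the code under test

values = [3, 1, 4, 1, 5, 9]
original = values[:]
result = sort(values)
assert values == original
assert is_sorted(result)
assert is_permutation(result, values)
```

Even in a language as concise as Python the helpers outweigh a typical call to the code under test; in C the imbalance is starker still, which is the warning being made here.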
Details, details
One response to the evergreen question, ‘How do we know that our tests are correct?’, is to make the test code significantly simpler. Tony Hoare pointed out that
There are two ways of constructing a software design: one way is to make it so simple that there are obviously no deficiencies and the other is to make it so complicated that there are no obvious deficiencies.
If the tests are significantly simpler than the code being tested, they are also more likely to be correct. And when they are incorrect, the errors are easier to spot and fix.
The solution to the problem has been staring us in the face. Alfred North Whitehead observed that
We think in generalities, but we live in detail.
Using concrete examples eliminates this accidental complexity and opportunity for accident. For example, given the following input:
[3, 1, 4, 1, 5, 9]
The result of sorting is the following:
[1, 1, 3, 4, 5, 9]
No other answer will do. And there is no need to write any auxiliary code. Extracting the constancy check as a separate test case, the test reduces to the pleasingly simple and direct
assert sort([3, 1, 4, 1, 5, 9]) == [1, 1, 3, 4, 5, 9]
We are, of course, not restricted to only a single example. For each given input there is a single output, and we are free to source many inputs. This helps highlight how the balance of effort has shifted. From being distracted into a time sink by the mechanics and completeness of the auxiliary code, we can now spend time writing a variety of tests to demonstrate different properties of the code under test, such as sorting empty lists, lists of single items, lists of identical values, large lists, etc.
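In that spirit, a small battery of example-based tests might look like the following — the cases are mine, with Python’s sorted standing in for the code under test:

```python
def sort(values):
    return sorted(values)  # stand-in for the code under test

assert sort([]) == []                                   # empty list
assert sort([42]) == [42]                               # single item
assert sort([7, 7, 7]) == [7, 7, 7]                     # identical values
assert sort([3, 1, 4, 1, 5, 9]) == [1, 1, 3, 4, 5, 9]   # duplicates sort together
assert sort([9, 5, 4, 3, 1, 1]) == [1, 1, 3, 4, 5, 9]   # reverse order
assert sort(list(range(1000, 0, -1))) == list(range(1, 1001))  # large input
```

Each case is a concrete, unarguable input–output pair; no auxiliary predicates are needed to judge it.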
By being more precise and more concrete, the resulting tests will both cover more and communicate more. An understanding of postconditions can guide us in how we select our tests, or our tests can illustrate and teach us about the postconditions, but the approach no longer demands logical completeness and infallibility on our part.
A concrete conclusion
Precision matters. A test is not simply an act of confirmation; it is an act of communication. There are many assertions made in tests that, although not wrong, reflect only a vague description of what we can say about the code under test.
For example, the result of adding an item to an empty repository object is not simply that it is not empty: it is that the repository now has a single entry, and that the single item held is the item added. Two or more items would also qualify as not empty, but would be wrong. A single item of a different value would also be wrong. Another example would be that the result of adding a row to a table is not simply that the table is one row bigger: it’s also that the row’s key can be used to recover the row added. And so on.
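Sketched as a test, with a deliberately minimal — and entirely hypothetical — repository class standing in for whatever the real code would be:

```python
class Repository:
    """Minimal in-memory repository, invented purely for illustration."""

    def __init__(self):
        self._items = []

    def add(self, item):
        self._items.append(item)

    def size(self):
        return len(self._items)

    def items(self):
        return list(self._items)

def test_add_to_empty_repository():
    repo = Repository()
    repo.add("alice")
    # Not merely "not empty": exactly one entry, and it is the item added.
    assert repo.size() == 1
    assert repo.items() == ["alice"]
```

The two assertions together rule out the wrong answers the prose lists: two or more items, or a single item of a different value.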
Of course, not all domains have such neat mappings from input to output, but where the results are determined, the tests should be just as determined.
In being precise, however, it is easy to get lost in a level of formality that makes tests hard to work with, no matter how precise and correct they are. Concrete examples help to illustrate general cases in an accessible and unambiguous way. We can draw representative examples from the domain, bringing the test code closer to the reader, rather than forcing the reader into the test code.
This article was previously published online at: A shorter version was also published as a chapter in 97 Things Every Programmer Should Know, which is a collection of short and useful tips for programmers.
As well as contributing a chapter, Kevlin edited the book. You may recognise some of the other contributors to the book from the ACCU journals. Michael Feathers, Pete Goodliffe, Seb Rose, Allan Kelly, Giovani Asproni, Jon Jagger, Alan Griffiths, Russel Winder, Thomas Guest and Peter Sommerlad are those I spotted on a quick glance. You would be in good company if you wrote for us, and we would help you every step of the way.
Red Hat Bugzilla – Full Text Bug Listing
Description of problem:
Orbit-idl uses a compiler to preprocess the idl file. This idl file can contain #include
directives. If a header file named in an include directive is missing, the compiler
emits an error and sets its exit status to non-zero. The GNU compiler doesn't stop
working at this point and continues preprocessing the rest of the main source file. That
is just its behavior, and the output after the error is quite unreliable. However, orbit-idl
checks neither the compiler's exit status nor the presence of errors, and continues to
process the output. Isn't it bizarre to carry on in the presence of
errors? If a header file is missing, the errors will probably surface during the
build stage, where source files use the orbit-idl generated header, and ultimately the
user will be left frustrated by incomprehensible errors (an identifier used,
but not defined anywhere!). Orbit-idl should prevent these situations and report
the problem promptly. Currently there are no diagnostics at all if the .idl file
includes a missing header file.
As another reason to check for compiler errors: not all compilers continue working
after an error is spotted; the Intel compiler is one such case. In
that case the header produced by orbit-idl is incomplete and cannot be used
properly.
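The check being asked for amounts to inspecting the preprocessor's exit status before consuming its output. A minimal sketch in Python (orbit-idl itself is C; the `cpp` command name and the `preprocess` function are assumptions for illustration, not the actual orbit-idl code):

```python
import subprocess
import sys

def preprocess(idl_path, cpp=("cpp",)):
    """Run the preprocessor and refuse to continue if it failed.

    `cpp` is whatever preprocessor command the IDL compiler would invoke;
    the default is only a guess for illustration.
    """
    result = subprocess.run([*cpp, idl_path], capture_output=True, text=True)
    if result.returncode != 0:
        # A missing #include (or any other error) must stop code generation
        # instead of being silently swallowed.
        sys.stderr.write(result.stderr)
        raise RuntimeError(
            f"preprocessing {idl_path} failed (exit status {result.returncode})"
        )
    return result.stdout
```

With a check like this, a missing header stops the run with a clear message instead of producing an unusable generated header.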
Version-Release number of selected component (if applicable):
How reproducible:
always
Steps to Reproduce:
1. Compose an idl file (test.idl):
struct One {
long i;
};
#include <inexisting.h>
struct Two {
long j;
};
2. Try `orbit-idl test.idl`
Actual results:
No diagnostics: the error is suppressed and output files are produced.
Expected results:
An error, or at least a warning, about the missing file, so that a developer or user can
spot it and fix the .idl file accordingly.
Additional info:
This may already be fixed in orbit-idl-2, which you should be using
rather than orbit-idl - can you give that a try?
This error happens while building the evolution package. It calls orbit-idl, not orbit-
idl-2, so I am concerned only with orbit-idl.
With orbit-idl-2 the compiler errors are shown, but exit status is zero.
(Sorry about the inactivity here)
I'm pretty sure this has been fixed with ORBit2 and nothing really
uses ORBit anymore. Closing, but feel free to re-open if the problem
still exists with ORBit2 | https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=92071 | CC-MAIN-2017-47 | refinedweb | 395 | 57.87 |
02 December 2010 22:27 [Source: ICIS news]
HOUSTON (ICIS)--December US styrene butadiene rubber (SBR) spot prices could climb as much as 3-5 cents/lb ($66-110/tonne, €50-84/tonne) because of increasing production costs and export opportunities to China, a producer and a trader said on Thursday.
Buyers were not immediately available for comment. However, it was expected that there would be some buyer resistance in the US because domestic demand was considered light to moderate.
Production costs were driven up by continued strength in natural rubber prices and by a rise in chief feedstock butadiene (BD), which climbed 2 cents/lb in December to settle at 86 cents/lb.
“Right now, prices in China are about $1.30/lb, so unless we can push spot numbers up in the US, it doesn’t make sense not to look at export markets,” the producer said.
ICIS-assessed non-oil grade 1502 SBR spot prices averaged just over $1.29/lb CIF (cost, insurance and freight) China, while in the US, spot prices for 1502 material at the end of November were 107.00-113.00 cents/lb FOB (free on board) US Gulf (USG).
North American SBR suppliers include Goodyear, International Specialty Products (ISP), Lion Copolymer and Negromex.
Other Alias
erfc, erfcf
SYNOPSIS
#include <math.h>
double erfc(double x);
float erfcf(float x);
long double erfcl(long double x);
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
erfc():
- _ISOC99_SOURCE || _POSIX_C_SOURCE >= 200112L || _XOPEN_SOURCE
|| /* Since glibc 2.19: */ _DEFAULT_SOURCE
|| /* Glibc versions <= 2.19: */ _BSD_SOURCE || _SVID_SOURCE
erfcf(), erfcl():
- _ISOC99_SOURCE || _POSIX_C_SOURCE >= 200112L
|| /* Since glibc 2.19: */ _DEFAULT_SOURCE
|| /* Glibc versions <= 2.19: */ _BSD_SOURCE || _SVID_SOURCE
DESCRIPTION
These functions return the complementary error function of x, that is, 1.0 - erf(x).
ATTRIBUTES
For an explanation of the terms used in this section, see attributes(7).
CONFORMING TO
C99, POSIX.1-2001, POSIX.1-2008.
The variant returning double also conforms to SVr4, 4.3BSD.
NOTES
The erfc(), erfcf(), and erfcl() functions are provided to avoid the loss of accuracy that would occur for the calculation 1-erf(x) for large values of x (for which the value of erf(x) approaches 1).
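EXAMPLES
The loss of accuracy is easy to demonstrate. The sketch below uses Python's math.erf and math.erfc, which wrap the same C library functions, so it mirrors what the C code would show:

```python
import math

x = 10.0
naive = 1.0 - math.erf(x)   # erf(10.0) rounds to exactly 1.0 in double precision
direct = math.erfc(x)       # computed directly, the tiny tail survives

print(naive)   # 0.0 -- every significant digit lost
print(direct)  # ~2.0885e-45
```

For large x the subtraction cancels completely, while the direct call still returns a meaningful (if tiny) result.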
COLOPHON
This page is part of release 4.06 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.