Jumping and framerate independence. I've tried a few solutions and didn't like them.
Here is a simple example that demonstrates the point: an empty scene, with this script attached to a GameObject:
using UnityEngine;
using System.Collections;

public class jumpTest : MonoBehaviour
{
    private float velocityY = 0;
    private int counter = 0;
    private float jumpHeight = 0.1f;
    private Vector3 gravity = new Vector3(0, -.1f, 0);

    // Update is called once per frame
    void Update ()
    {
        // gravity, made framerate-independent
        float gravityY = gravity.y * Time.deltaTime;
        if (counter == 0)
        {
            // the jump is an impulse, so jumpHeight is not multiplied by deltaTime
            // @see
            // velocity, framerate-independent
            velocityY += (jumpHeight + gravityY);
        }
        else
        {
            velocityY += gravityY;
        }
        if (velocityY < 0)
        {
            velocityY = 0;
        }
        transform.position = new Vector3 (
            transform.position.x,
            transform.position.y + velocityY,
            transform.position.z
        );
        counter++;
    }
}
So what do we have here: on the first frame of execution we add an "upward impulse" to the GameObject. On subsequent frames we apply gravity, so that at some point on the way up gravity takes over and the GameObject starts falling. As soon as we register that it is falling, we STOP the simulation and check the Y position.
I would expect that if this were framerate-independent, we would get approximately the same Y position across multiple runs (with a difference of about +/- the size of the last deltaTime, since the update ticks differ from run to run). However, this isn't the case: the Y position differs wildly from run to run. Also, on mobiles where the framerate dips lower, the Y position is higher. Framerate-dependent. How do we make this piece of code FPS-independent?
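For reference, the framerate dependence is easy to reproduce outside Unity. This is a minimal Python sketch of the same loop (the function name and the fixed timesteps are mine, purely for illustration); it runs the integrator above with a fixed deltaTime and returns the peak Y:

```python
def peak_y(dt, jump_height=0.1, gravity_y=-0.1):
    """Run the Update() loop above with a fixed deltaTime; return peak Y."""
    velocity_y = 0.0
    y = 0.0
    counter = 0
    while True:
        g = gravity_y * dt                   # gravity scaled by deltaTime
        if counter == 0:
            velocity_y += jump_height + g    # jump impulse, not scaled by dt
        else:
            velocity_y += g
        if velocity_y < 0:                   # started falling: stop and report
            return y
        y += velocity_y                      # position NOT scaled by dt (the bug)
        counter += 1

print(peak_y(1 / 30))   # one fixed framerate
print(peak_y(1 / 60))   # another framerate reaches a very different peak
```

The two peaks differ wildly, which is exactly the framerate dependence described in the question.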
Answer by Bunny83 · Apr 23, 2016 at 02:49 PM
Actually, doing what you did in your answer is what most people do. However, it's still not framerate-independent. The error is quite small, but it is still there. For most single-player applications the error can be ignored, but if we are talking about multiplayer you want to ensure that all players get the same values.
Here's an example table of how time, acceleration, speed and position of an object advance depending on the way they are calculated. The assumed framerate is 10 frames per second for this example.
deltaTime(dt) = 0.1 (== 10fps)
time acc speed pos1 pos2 pos3(old + last + cur == new)
0.0 2 0.0 0.0 0.0 0.0
0.1 2 0.2 0.02 0.01 0.0 + 0.0 + 0.01 == 0.01
0.2 2 0.4 0.06 0.04 0.01 + 0.01 + 0.02 == 0.04
0.3 2 0.6 0.12 0.09 0.04 + 0.02 + 0.03 == 0.09
0.4 2 0.8 0.2 0.16 0.09 + 0.03 + 0.04 == 0.16
0.5 2 1.0 0.3 0.25 0.16 + 0.04 + 0.05 == 0.25
0.6 2 1.2 0.42 0.36 0.25 + 0.05 + 0.06 == 0.36
0.7 2 1.4 0.56 0.49 0.36 + 0.06 + 0.07 == 0.49
0.8 2 1.6 0.72 0.64 0.49 + 0.07 + 0.08 == 0.64
0.9 2 1.8 0.9 0.81 0.64 + 0.08 + 0.09 == 0.81
1.0 2 2.0 1.1 1.0 0.81 + 0.09 + 0.1 == 1.0
1.1 2 2.2 1.32 1.21 1.0 + 0.1 + 0.11 == 1.21
1.2 2 2.4 1.56 1.44 1.21 + 0.11 + 0.12 == 1.44
1.3 2 2.6 1.82 1.69 1.44 + 0.12 + 0.13 == 1.69
1.4 2 2.8 2.1 1.96 1.69 + 0.13 + 0.14 == 1.96
1.5 2 3.0 2.4 2.25 1.96 + 0.14 + 0.15 == 2.25
1.6 2 3.2 2.72 2.56 2.25 + 0.15 + 0.16 == 2.56
1.7 2 3.4 3.06 2.89 2.56 + 0.16 + 0.17 == 2.89
1.8 2 3.6 3.42 3.24 2.89 + 0.17 + 0.18 == 3.24
1.9 2 3.8 3.8 3.61 3.24 + 0.18 + 0.19 == 3.61
2.0 2 4.0 4.2 4.0 3.61 + 0.19 + 0.2 == 4.0
[pos1] -- the usual way to calculate movement
speed = speed + acc * dt
pos = pos + speed * dt
[pos2] -- the correct values determined by absolute time passed (t)
speed = acc * t
pos = (acc/2) * t²
[pos3] -- the tricky way how to get correct values when calculating accumulative each frame
pos = pos + (speed/2)*dt
speed = speed + acc*dt
pos = pos + (speed/2)*dt
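The three columns can be reproduced with a short script (plain Python here for convenience; the variable names follow the answer, not any Unity API):

```python
acc, dt = 2.0, 0.1           # uniform acceleration, 10 fps timestep

v1 = pos1 = 0.0              # [pos1] the usual per-frame integration
v3 = pos3 = 0.0              # [pos3] the half/full/half scheme
t = 0.0
for _ in range(20):          # advance to t = 2.0
    t += dt
    v1 += acc * dt           # [pos1]: update speed, then position
    pos1 += v1 * dt
    pos3 += (v3 / 2) * dt    # [pos3]: half position update...
    v3 += acc * dt           # ...full speed update...
    pos3 += (v3 / 2) * dt    # ...half position update
pos2 = (acc / 2) * t * t     # [pos2]: exact closed form

print(pos1, pos2, pos3)      # pos3 matches pos2; pos1 overshoots
```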
As you might remember from physics class, this is how you calculate the position when you have a uniform acceleration. This value is shown in the table as [pos2]. It is not calculated accumulatively, as we have to do it each frame. Since the real world doesn't work in "frames", our splitting of time into frames causes a lot of trouble. A linear change can be scaled by deltaTime without any problems, since scaling by dt is a linear operation. However, this doesn't work for a quadratic progression, because we can't assume that the values between two points are linearly distributed, which is exactly what we assume when we calculate the position as shown in [pos1].
The actual trick to get the correct answer independent from the framerate is to do this each frame:
add half of the last speed multiplied by "deltaTime" to pos.
update speed as usual by adding the acceleration multiplied by "deltaTime".
once more add half of the current speed multiplied by "deltaTime" to pos.
So it would look like this:
// gravity, jumpHeight and shouldJump are your own fields; example values here
Vector3 velocity = Vector3.zero;
Vector3 gravity = new Vector3(0, -9.81f, 0);
float jumpHeight = 5f;
bool shouldJump = false;

void Update ()
{
    transform.position += velocity * Time.deltaTime / 2;
    velocity += gravity * Time.deltaTime;
    if (shouldJump)
        velocity += Vector3.up * jumpHeight;
    transform.position += velocity * Time.deltaTime / 2;
}
Note: the jump impulse should be added in only one frame, and it should not be multiplied by Time.deltaTime. "shouldJump" is simply your condition for when you want to perform a jump; it should be true for one frame only.
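As a quick sanity check of the scheme (sketched in plain Python rather than C#, with made-up numbers), the half/full/half update produces the same position for any timestep under constant acceleration:

```python
def half_step(acc, dt, t_end):
    """Integrate from rest using: half position, full speed, half position."""
    v = pos = 0.0
    for _ in range(round(t_end / dt)):
        pos += (v / 2) * dt
        v += acc * dt
        pos += (v / 2) * dt
    return pos

# Same simulated time, very different "framerates": identical result.
print(half_step(2.0, 0.1, 2.0))    # 10 fps
print(half_step(2.0, 0.001, 2.0))  # 1000 fps
```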
Very thorough, appreciate it. Here, have a cookie <3.
Answer by _watcher_ · Apr 23, 2016 at 10:49 AM
Here is the solution:
Change:
transform.position = new Vector3 (
    transform.position.x,
    transform.position.y + velocityY,
    transform.position.z
);
To:
transform.position = new Vector3 (
    transform.position.x,
    transform.position.y + velocityY * Time.deltaTime,
    transform.position.z
);
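With the deltaTime scaling in place the result no longer depends wildly on framerate, but (as the table in the other answer shows) a small per-timestep error remains. A quick Python check, with made-up numbers purely for illustration:

```python
def naive_euler(acc, dt, t_end):
    """[pos1]-style integration: full speed update, then full position update."""
    v = pos = 0.0
    for _ in range(round(t_end / dt)):
        v += acc * dt
        pos += v * dt
    return pos

# Exact position after 2 s at acceleration 2 is (2/2) * 2^2 = 4.0.
print(naive_euler(2.0, 0.1, 2.0))   # coarse timestep overshoots
print(naive_euler(2.0, 0.01, 2.0))  # finer timestep: closer, but still off
```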
Source: https://answers.unity.com/questions/1175418/yet-another-timedeltatime-question.html?sort=oldest
DEBSOURCES
subversion (1.9.5-1+deb9u3) stretch; urgency=medium
* Backport r1827688, fixing a regression introduced in the fixes for SHA1
collisions, where commits would incorrectly fail with a "Filesystem is
corrupt" error if the delta length is a multiple of 16K.
-- James McCoy <jamessan@debian.org> Fri, 20 Jul 2018 22:35:40 -0400
subversion (1.9.5-1+deb9u2) stretch; urgency=medium
* Backport r1759116, working around an issue in APR's trunc API. This is a
prerequisite for the SHA1/shattered fixes.
* Backport r1794527 and r1796725 to prevent the possibility of rep-sharing
between a directory rep and a file/prop rep.
* Backport r1795993 and r1796470 to reject commits which would introduce
hash collisions with existing data, thus addressing the SHA1/shattered
issue.
-- James McCoy <jamessan@debian.org> Sat, 30 Jun 2018 09:44:22 -0400
subversion (1.9.5-1+deb9u1) stretch-security; urgency=high
* patches/CVE-2017-9800: Arbitrary code execution on clients through
malicious svn+ssh URLs in svn:externals and svn:sync-from-url
-- James McCoy <jamessan@debian.org> Tue, 08 Aug 2017 23:04:58 -0400
subversion (1.9.5-1) unstable; urgency=medium
* New upstream release
+ Security fix
- CVE-2016-8734: Unrestricted XML entity expansion in HTTP clients
+ Fix corruption of "{DATE}" revision variable in swig-pl. (Closes:
#843138)
+ Remove patches:
- ruby-frozen-nil: Alternative fix committed upstream.
- Backported patches: perl-swig-crash, swig3.x-compat,
r1722164-swig-cppflags
* Fix #! lines for libsvn-{java,dev}.postinst. (Closes: #843292, #843288)
* Remove maintainer scripts that were handling pre-Jessie changes.
* Use dh_apache2's substvars in libapache2-mod-svn.
-- James McCoy <jamessan@debian.org> Tue, 29 Nov 2016 22:50:42 -0500
subversion (1.9.4-3) unstable; urgency=medium
* Build with hardening flags
* Backport patches/perl-swig-crash from upstream to fix crashes with the
Perl bindings, commonly seen when using git-svn. (Closes: #780246,
#534763)
-- James McCoy <jamessan@debian.org> Sat, 03 Sep 2016 14:45:04 -0400
subversion (1.9.4-2) unstable; urgency=medium
* Add Build-Depends on rename package and invoke rename instead of prename.
(Closes: #826057)
* Fix removal of .so/.la files for private libsvn_ra_{serf,local} from -dev
package.
* Replace use of debhelper's deprecated -s with -a
* Declare compliance with Policy 3.9.8, no changes required
* Use https URL for Vcs-Browser
-- James McCoy <jamessan@debian.org> Mon, 25 Jul 2016 22:48:13 -0400
subversion (1.9.4-1) unstable; urgency=high
* New upstream release.
+ Security fixes
- CVE-2016-2167: svnserve/sasl may authenticate users using the wrong
realm
- CVE-2016-2168: Remotely triggerable DoS vulnerability in mod_authz_svn
during COPY/MOVE authorization check
+ Remove merged patch ruby-test-unit.
+ Fix non-canonical path assertion in svn-graph.pl. (Closes: #702922)
+ Abort a commit on Ctrl-C. (Closes: #502222, #501971)
* d/rules: Remove an extraneous "done" to fix FTBFS when bash is $SHELL.
(Closes: #821930)
-- James McCoy <jamessan@debian.org> Wed, 27 Apr 2016 20:47:49 -0400
subversion (1.9.3-3) unstable; urgency=medium
* Remove transitional packages and maintainer snippets supporting upgrades
from pre-jessie systems.
* Enable libsvn-java on m68k and sparc64, since openjdk-8-jdk is now
available on those archs.
* Declare compliance with policy 3.9.7, no changes needed.
* Remove subversion-dbg package in favor of automatic -dbgsym package.
* Bump debhelper compat to 9.
* Fix FTBFS on mips(el) by working around GCC bug #816698
* Fix SWIG build issues
+ Backport patches/swig3.x-compat from upstream
+ Switch back to “Build-Depends: swig” (Closes: #817002)
-- James McCoy <jamessan@debian.org> Mon, 14 Mar 2016 00:34:52 -0400
subversion (1.9.3-2) unstable; urgency=medium
* Remove -Wdate-time from CPPFLAGS passed to swig. (Closes: #809054)
-- James McCoy <jamessan@debian.org> Fri, 15 Jan 2016 22:45:33 -0500
subversion (1.9.3-1) unstable; urgency=high
* New upstream release.
+ Security fixes
- CVE-2015-5259: Heap overflow and out-of-bounds read in svn:// protocol
parser
- CVE-2015-5343: Heap overflow and out-of-bounds read in mod_dav_svn
+ Fix dumps of no-op changes with “svnadmin dump”. (Closes: #803725)
+ Fix segfault when performing a diff when repository is on server root.
(Closes: #802611)
+ Fix translations of commit notifications. (Closes: #802156)
+ Fix authz with mod_auth_ntlm/mod_auth_kerb. (Closes: #797216)
+ Restore reporting (un)lock errors as failures. (Closes: #796781)
-- James McCoy <jamessan@debian.org> Tue, 15 Dec 2015 20:26:57 -0500
subversion (1.9.2-3) unstable; urgency=medium
* Re-enable libsvn-java on kfreebsd-*.
* Ensure swig2.0 is used to avoid build failures, until upstream figures
out how to work with swig >= 3.0. (Closes: #804389)
* Fix FTBFS with Ruby 2.2 (Closes: #803589)
+ Add ruby-frozen-nil patch to create a new Object instead of trying to
make modifications to the nil object.
+ Add ruby-test-unit patch to be compatible with the ruby-test-unit gem as
well as the older test-unit API provided by minitest.
-- James McCoy <jamessan@debian.org> Mon, 09 Nov 2015 19:22:18 -0500
subversion (1.9.2-2) unstable; urgency=medium
* Fix FTBFS with older Ruby versions by using RbConfig['vendorarchdir'] to
find the .a/.la files we're deleting.
-- James McCoy <jamessan@debian.org> Sun, 18 Oct 2015 22:10:03 -0400
subversion (1.9.2-1) unstable; urgency=medium
* New upstream release
+ Fix crash when saving credentials in kwallet. (Closes: #736879,
LP: #563179)
-- James McCoy <jamessan@debian.org> Wed, 23 Sep 2015 21:27:15 -0400
subversion (1.9.1-1) unstable; urgency=medium
* New upstream release
+ Remove direct use of svn_fs_open2 from libsvn_fs_x, thus fixing the
missing svn_fs_open2 symbol. (Closes: #795160)
* Enable gpg verification of new releases.
* Rename bash-completion file to svn and add symlinks for all other commands
which have completion. (Closes: #797648)
* debian/tests/libapache2-mod-svn: Stop apache2 before ending the test, to
avoid leaving stray processes running.
-- James McCoy <jamessan@debian.org> Mon, 07 Sep 2015 19:21:22 -0400
subversion (1.9.0-1) unstable; urgency=medium
* Upload to unstable
* New upstream release.
+ Security fixes
- CVE-2015-3184: Mixed anonymous/authenticated path-based authz with
httpd 2.4
- CVE-2015-3187: svn_repos_trace_node_locations() reveals paths hidden
by authz
* Add >= 2.7 requirement for python-all-dev Build-Depends, needed to run
tests.
* Remove Build-Conflicts against ruby-test-unit. (Closes: #791844)
* Remove patches/apache_module_dependency in favor of expressing the
dependencies in authz_svn.load/dav_svn.load.
* Build-Depend on apache2-dev (>= 2.4.16) to ensure ap_some_authn_required()
is available when building mod_authz_svn and Depend on apache2-bin (>=
2.4.16) for runtime support.
-- James McCoy <jamessan@debian.org> Fri, 07 Aug 2015 21:32:47 -0400
subversion (1.9.0~rc3-1) experimental; urgency=medium
* New upstream pre-release.
* Point the Vcs-* URLs at the right directory
-- James McCoy <jamessan@debian.org> Thu, 16 Jul 2015 19:39:54 -0400
subversion (1.9.0~rc2-2) experimental; urgency=medium
* Bump minimum JDK version to 1.6 in accordance with upstream change,
“javahl: requires Java 1.6 (r1677003)”
- This causes libsvn-java to no longer be available where gcj is the only
available Java implementation
-- James McCoy <jamessan@debian.org> Thu, 11 Jun 2015 22:29:08 -0400
subversion (1.9.0~rc2-1) experimental; urgency=medium
* New upstream pre-release. Refresh patches.
-- James McCoy <jamessan@debian.org> Tue, 02 Jun 2015 06:52:59 -0400
subversion (1.9.0~rc1-2) experimental; urgency=medium
* Install bash completion to /usr/share/bash-completion/completions
* Add dav_svn_get_repos_path2 symbol to apache_module_dependency patch.
(Closes: #786903)
-- James McCoy <jamessan@debian.org> Fri, 29 May 2015 20:07:32 -0400
subversion (1.9.0~rc1-1) experimental; urgency=medium
* New upstream pre-release. Refresh patches.
+ Remove backported patches libtoolize, ruby2.0-build-fixes,
test-failure-with-optimizations, CVE-2014-3580, CVE-2014-8108,
CVE-2015-0202, CVE-2015-0248, CVE-2015-0251.
+ New svn-vendor tool, alternative to svn_load_dirs.
+ svn-bench renamed to svnbench and moved to subversion package.
+ fsfs-stats tool replaced by the "stats" subcommand of the new svnfsfs
command.
+ Minimum supported version of serf bumped to 1.3.4.
+ pkgconfig files are available for the various libsvn_* libraries.
+ Fix “access forbidden” errors when performing a diff on a remote
repository when the user does not have access to the parent directory.
(Closes: #739278)
* debian/rules: Add new generated files to clean target
* debian/control:
+ Remove Troy Heber from Uploaders, at his request. Thanks for all the
fish!
+ Add dh-python to Build-Depends
-- James McCoy <jamessan@debian.org> Mon, 11 May 2015 19:56:48 -0400
subversion (1.8.13-1) unstable; urgency=medium
* New upstream release. Refresh patches.
- Remove backported patches CVE-2014-8108, CVE-2014-3580, CVE-2015-0202,
CVE-2015-0248, CVE-2015-0251, ruby2.0-build-fixes, and
test-failure-with-optimizations.
* Add patches wc-queries-test1-r1672295 and wc-queries-test2-r1673691, from
upstream, to fix wc-queries test failures with new SQLite versions.
(Closes: #785496)
-- James McCoy <jamessan@debian.org> Fri, 22 May 2015 02:43:09 -0400
subversion (1.8.10-6) unstable; urgency=high
* patches/CVE-2015-0202: Excessive memory use with certain REPORT requests
against mod_dav_svn with FSFS repositories
* patches/CVE-2015-0248: Assertion DoS vulnerability for certain mod_dav_svn
and svnserve requests with dynamically evaluated revision numbers
* patches/CVE-2015-0251: mod_dav_svn allows spoofing svn:author property
values for new revisions
-- James McCoy <jamessan@debian.org> Tue, 31 Mar 2015 22:51:18 -0400
subversion (1.8.10-5) unstable; urgency=medium
* patches/CVE-2014-8108: mod_dav_svn DoS vulnerability with invalid virtual
transaction names (Closes: #773315)
* patches/CVE-2014-3580: mod_dav_svn DoS vulnerability with invalid REPORT
requests (Closes: #773263)
-- James McCoy <jamessan@debian.org> Wed, 17 Dec 2014 00:11:03 -0500
subversion (1.8.10-4) unstable; urgency=medium
* control: Use "dh_install --list-missing" instead of --fail-missing to
avoid a FTBFS with parallel builds. (Closes: #768903)
-- James McCoy <jamessan@debian.org> Mon, 10 Nov 2014 22:19:02 -0500
subversion (1.8.10-3) unstable; urgency=medium
* Add a NEWS item describing that 1.7.x and later do not support having a
working copy which spans multiple filesystems. (Closes: #766285)
* rules: Needs more MAN3EXT so generated swig-pl Makefile never installs
files to debian/tmp with wrong extensions.
* Move some less frequently used tools to subversion-tools and include the
fsfs-* tools. (Closes: #764689)
* Switch from specifying gcj as the Java implementation to default-jdk.
(Closes: #737527, #421400)
- Remove patches/java-build
-- James McCoy <jamessan@debian.org> Sat, 25 Oct 2014 21:47:16 -0400
subversion (1.8.10-2) unstable; urgency=medium
* Add patches/test-failure-with-optimizations from upstream to fix test
failures with certain build configurations. (Closes: #757773)
* Add patches/libtoolize from upstream to support the Multi-Arch libtool
packaging. (Closes: #761789)
-- James McCoy <jamessan@debian.org> Wed, 24 Sep 2014 20:54:34 -0400
subversion (1.8.10-1) unstable; urgency=medium
* New upstream release. Refresh patches.
- Includes security fixes:
+ CVE-2014-3522: ra_serf improper validation of wildcards in SSL certs.
+ CVE-2014-3528: credentials cached with svn may be sent to wrong
server.
* debian/rules: Avoid an unnecessary call to dpkg-buildflags.
* debian/control: Pre-Depend on ${misc:Pre-Depends} instead of hard-coding
multiarch-support, as suggested by Lintian.
-- James McCoy <jamessan@debian.org> Tue, 12 Aug 2014 21:57:23 -0400
subversion (1.8.9-2) unstable; urgency=medium
* Use Perl's $Config{vendorarch} to determine where libsvn-perl's files were
installed. This enables being built against a Multi-Archified Perl.
(Closes: #752816)
-- James McCoy <jamessan@debian.org> Wed, 16 Jul 2014 08:46:24 -0400
subversion (1.8.9-1) unstable; urgency=medium
* New upstream release
* Merge changes from Ubuntu:
- Add DEP-8 test for Apache functionality
- debian/rules: Create pot file on build.
- debian/rules: Ensure the doxygen output directory exists
- Move svn2cl to subversion-tools' Suggests on Ubuntu.
-- James McCoy <jamessan@debian.org> Tue, 20 May 2014 22:45:32 -0400
subversion (1.8.8-2) unstable; urgency=medium
* Fix builds with ruby 2.x. (Closes: #739772)
-- James McCoy <jamessan@debian.org> Sun, 30 Mar 2014 22:46:58 -0400
subversion (1.8.8-1) unstable; urgency=medium
* New upstream release. Refresh patches.
- Remove backported patches sqlite_3.8.x_workaround & swig-pl_build_fix
- Fix integer overflows with 32-bit svnserve, which could cause an infinite
loop (Closes: #738840) or inaccurate statistics (Closes: #738841)
- Work around SQLite not honoring umask when creating rep-cache.db.
(Closes: #735446)
- Includes security fix:
+ CVE-2014-0032: mod_dav_svn crash when handling certain requests with
SVNListParentPath on (Closes: #737815)
* Add a subversion-dbg package. (Closes: #508147)
* Bump libdb5.1-dev → libdb5.3-dev (Closes: #738650)
-- James McCoy <jamessan@debian.org> Thu, 20 Feb 2014 20:38:10 -0500
subversion (1.8.5-2) unstable; urgency=medium
* rules: Move comment out of multi-line variable definition so configure is
run with the correct flags. (Closes: #735609)
* control: Remove libsvn-ruby1.8 Provides from ruby-svn.
* Add patches/swig-pl_build_fix, from upstream, to fix a build failure when
configure is run with --enable-sqlite-compatibility.
-- James McCoy <jamessan@debian.org> Fri, 17 Jan 2014 20:05:25 -0500
subversion (1.8.5-1) unstable; urgency=low
[ Peter Samuelson ]
* New upstream release. (Closes: #725787) Rediff patches:
- Remove apr-abi1 (applied upstream), rename apr-abi2 to apr-abi
- Remove loosen-sqlite-version-check (shouldn't be needed)
- Remove java-osgi-metadata (applied upstream)
- svnmucc prompts for a changelog if none is provided. (Closes: #507430)
- Remove fix-bdb-version-detection, upstream uses "apu-config --dbm-libs"
- Remove ruby-test-wc (applied upstream)
- Fix “svn diff -r N file” when file has svn:mime-type set.
(Closes: #734163)
- Support specifying an encoding for mod_dav_svn's environment in which
hooks are run. (Closes: #601544)
- Fix ordering of “svnadmin dump” paths with certain APR versions.
(Closes: #687291)
- Provide a better error message when authentication fails with an
svn+ssh:// URL. (Closes: #273874)
- Updated Polish translations. (Closes: #690815)
[ James McCoy ]
* Remove all traces of libneon, replaced by libserf.
* patches/sqlite_3.8.x_workaround: Upstream fix for wc-queries-test test
failures.
* Run configure with --with-apache-libexecdir, which allows removing part of
patches/rpath.
* Re-enable auth-test as upstream has fixed the problem of picking up
libraries from the environment rather than the build tree.
(Closes: #654172)
* Point LD_LIBRARY_PATH at the built auth libraries when running the svn
command during the build. (Closes: #678224)
* Add a NEWS entry describing how to configure mod_dav_svn to understand
UTF-8. (Closes: #566148)
* Remove ancient transitional package, libsvn-ruby.
* Enable compatibility with Sqlite3 versions back to Wheezy.
* Enable hardening flags. (Closes: #734918)
* patches/build-fixes: Enable verbose build logs.
* Build against the default ruby version. (Closes: #722393)
-- James McCoy <jamessan@debian.org> Sun, 12 Jan 2014 19:48:33 -0500
subversion (1.7.14-1) unstable; urgency=medium
* New upstream version.
- mod_dav_svn: Prevent crashes with some 3rd party modules. (Closes:
#728352)
- Includes security fix:
+ CVE-2013-4505: mod_dontdothat restrictions bypassed by relative
requests (Closes: #730541)
+ CVE-2013-4558: mod_dav_svn assertion when SVNAutoversioning is
enabled.
* Bump compat to debhelper 8
* Use shlibs.local to handle intrapackage dependencies on private libraries.
* rules: Fix removal of libsvnjavahl-1.a/.la/.so from libsvn-dev. (Closes:
#711911)
* Remove obsolete conffiles under /etc/svn2cl. (Closes: #677990)
-- James McCoy <jamessan@debian.org> Fri, 27 Dec 2013 10:17:38 -0500
subversion (1.7.13-3) unstable; urgency=low
* Remove architecture exclusions for libsvn-java. (Closes: #710498)
* Fix multi-arch Python include paths. (Closes: #698443)
* Add strict Depends on libsvn1 to libapache2-mod-svn since the latter
leverages some internal APIs and therefore must be upgraded in lock step.
(Closes: #705464)
* Standards-Version 3.9.5 (no change needed).
* Add strict minimum Depends on libsqlite3-0 to work around lack of build
time dependency information. (Closes: #721878)
-- James McCoy <jamessan@debian.org> Sat, 16 Nov 2013 11:33:37 -0500
subversion (1.7.13-2) unstable; urgency=low
* Remove unnecessary libapache2-svn.prerm. (Closes: #726717)
-- James McCoy <jamessan@debian.org> Fri, 18 Oct 2013 23:23:06 -0400
subversion (1.7.13-1) unstable; urgency=low
[ Peter Samuelson ]
* New upstream version. (Closes: #719476)
- patches/CVE-2013-1968.patch, patches/CVE-2013-2112.patch: remove,
obsoleted
- Includes security fixes:
+ CVE-2013-4131: Remotely triggered crash in mod_dav_svn (Closes:
#717794)
+ CVE-2013-4277: Local privilege escalation vulnerability via symlink
attack (Closes: #721542)
+ CVE-2013-2088: Arbitrary code execution in check-mime-type.pl and
svn-keyword-check.pl contrib scripts.
[ James McCoy ]
* Add myself to uploaders.
* Acknowledge NMUs.
* Canonicalize the Vcs-* URLs. Thanks, Lintian.
* Remove Guilherme de S. Pastore from Uploaders. (Closes: #698270)
* Add Breaks: svnmailer (<< 1.0.9) to python-subversion. (Closes: #726491)
* Remove obsolete conffile /etc/emacs/site-start.d/50psvn.el. (Closes:
#705033)
-- James McCoy <jamessan@debian.org> Wed, 16 Oct 2013 20:53:11 -0400
subversion (1.7.9-1+nmu6) unstable; urgency=low
* Add Breaks/Replaces: libapache2-svn to libapache2-mod-svn.
-- James McCoy <jamessan@debian.org> Tue, 01 Oct 2013 00:28:55 -0400
subversion (1.7.9-1+nmu5) unstable; urgency=low
* Non-maintainer upload.
* Re-enable libapache2-svn build (Closes: #725028)
* Adjust packaging for Apache 2.4 compatibility (Closes: #712004)
- Rename libapache2-svn to libapache2-mod-svn and add a transitional
package
- Add apache2-dev & dh-apache2 to Build-Depends
- Add apache2-api-20120211 as a Depends for libapache2-mod-svn
- Update maintainer scripts to use apache2-maintscript-helper
-- James McCoy <jamessan@debian.org> Mon, 30 Sep 2013 19:02:34 -0400
subversion (1.7.9-1+nmu4) unstable; urgency=low
* Non-maintainer upload.
* patches/ruby-test-wc: New patch from upstream to fix a stray case of a
testsuite failure due to APR 1.4 hash randomization. Thanks to
Michael Gilbert for digging this up. (Closes: #705364)
* Use --disable-neon-version-check to build libsvn_ra_neon against libneon27
0.30.0.
* Add handling of directory to symlink conversions for
/usr/share/doc/libsvn-{dev,java,ruby,ruby1.8}. (Closes: #690155)
-- James McCoy <jamessan@debian.org> Mon, 02 Sep 2013 21:11:08 -0400
subversion (1.7.9-1+nmu3) unstable; urgency=high
* Non-maintainer upload.
* Disable libapache2-svn build (closes: #712004, #666794)
-- Julien Cristau <jcristau@debian.org> Tue, 09 Jul 2013 19:56:11 +0200
subversion (1.7.9-1+nmu2) unstable; urgency=high
* Non-maintainer upload.
* Add CVE-2013-1968.patch patch.
CVE-2013-1968: Subversion FSFS repositories can be corrupted by newline
characters in filenames. (Closes: #711033)
* Add CVE-2013-2112.patch patch.
CVE-2013-2112: Fix remotely triggerable DoS vulnerability. (Closes: #711033)
-- Salvatore Bonaccorso <carnil@debian.org> Thu, 06 Jun 2013 13:14:52 +0200
subversion (1.7.9-1+nmu1) unstable; urgency=medium
* Non-maintainer upload.
* Convert SVN_STREAM_CHUNK_SIZE to an integer in svn/core.py (closes: #683188).
-- Michael Gilbert <mgilbert@debian.org> Fri, 12 Apr 2013 00:58:01 +0000
subversion (1.7.9-1) unstable; urgency=medium
* New upstream version. Some DoS fixes in mod_dav_svn:
-
* patches/python-swig205, patches/g++47: Remove as obsolete.
* Don't make python-subversion 'Depends: subversion'. It is quite
usable on its own.
* Move libsvn1 to section 'libs'. (Ref: #700145)
* Update watch file. (Closes: #672157)
-- Peter Samuelson <peter@p12n.org> Sat, 06 Apr 2013 16:16:37 -0500
subversion (1.7.5-1) unstable; urgency=low
[ Peter Samuelson ]
* New upstream version. (Closes: #621692, #656966)
-
-- Peter Samuelson <peter@p12n.org> Sat, 16 Jun 2012 23:56:38 -0500
-- Peter Samuelson <peter@p12n.org> Fri, 08 Jun 2012 00:04:19 -0500
-- Peter Samuelson <peter@p12n.org> Sun, 03 Jun 2012 17:54:15 -0500
-- Ondřej Surý <ondrej@debian.org> Tue, 29 May 2012 15:49:32 +0200
subversion (1.6.17dfsg-3) unstable; urgency=medium
* libapache2.preinst: Fix upgrade case from before 1.6.17dfsg-2.
* libapache2.prerm: 'a2dismod' modules in reverse dependency order.
* patches/apache_module_dependency: New patch to allow mod_authz_svn to
load before mod_dav_svn and still use its functions.
All these together, Closes: #642250.
* Remove a bit more autofoo in 'clean' target.
-- Peter Samuelson <peter@p12n.org> Sat, 19 Nov 2011 18:56:28 -0600
subversion (1.6.17dfsg-2) unstable; urgency=low
* Standards-Version: 3.9.2. Also, multiarch.
* Move to debhelper level 7.
* patches/perl-warning: New patch to suppress a bogus Perl undef warning.
(Closes: #422699)
* patches/swig2-compat: New patch from upstream to build with swig 2.x.
(Closes: #634049)
* patches/perl-compiler-flags: New patch from upstream to address an
issue brought to light by Perl 5.14. (Closes: #628507)
* patches/sasl-mem-handling: New patch from upstream to fix a crash with
svn:// URLs and SASL authentication. (Closes: #631765)
* patches/svn2cl-upstream: Use --non-interactive in svn2cl to avoid
hanging on, e.g., password prompts. (Closes: #443860)
* patches/python-exception-syntax: New patch: Fix a couple instances of
literal string exceptions in Python, which don't work in 2.6+.
(Closes: #585358)
* Remove some preinst/postinst magic that hasn't been needed in years.
* Split authz_svn.load away from dav_svn.load, since most users do not
need both. New installs will enable only dav_svn by default.
* Restart apache in libapache2-svn postinst. (Closes: #610236, #628990)
* Improve symbols file with (regex)__ catchall for private symbols not
otherwise accounted for. (Closes: #607544) I'm also including a
workaround for rapidsvn, to be removed when 0.14 is released.
* Add ${misc:Depends} everywhere. Drop libsvn-java dependency on a jre.
Thanks, Lintian.
* Remove the extra copy of jquery supplied by doxygen, from libsvn-doc.
Doesn't seem to even be used. Thanks, Lintian.
* patches/po: New patch from Laurent Bigonville to fix minor issues in
fr.po and ja.po. (Closes: #607381)
* Move to dh_lintian, and fix up the overrides a bit.
-- Peter Samuelson <peter@p12n.org> Thu, 15 Sep 2011 12:02:03 -0500
subversion (1.6.17dfsg-1) unstable; urgency=high
* New upstream version. Includes security fixes:
- CVE-2011-1752: Remotely triggered crash in mod_dav_svn
- CVE-2011-1783: Remotely triggered memory exhaustion in mod_dav_svn
- CVE-2011-1921: Content leak of certain files marked unreadable
* svn-bisect: Support $SVN environment variable, requested by Daniel
Shahaf upstream.
* Update Lintian overrides to account for python through 2.9,
in case that ever comes to be.
-- Peter Samuelson <peter@p12n.org> Wed, 01 Jun 2011 17:07:33 -0500
subversion (1.6.16dfsg-1) unstable; urgency=high
* New upstream version.
- Fixes CVE-2011-0715: Remotely crash mod_dav_svn anonymously via a
lock token.
* patches/change-range: New patch to support -cA-B syntax on command line.
* Stop using svn-make-config.c; we can do the same just by running svn
itself in a controlled home directory. Delete debian/tools/.
-- Peter Samuelson <peter@p12n.org> Thu, 03 Mar 2011 10:55:42
subversion (1.4.2dfsg1-2) unstable; urgency=medium
[ Peter Samuelson ]
* rules: fix 'dontberoot' target not to run when it shouldn't.
(Closes: #396435)
* Add subversion-tools Conflicts: kdesdk-scripts (<= 4:3.5.5-1).
I'm told that their next release will remove the 'svn-clean' script,
which is quite similar to the one in subversion-tools. (See: #397874)
* Add manpages for svn-clean, svn-hot-backup, svn-fast-backup, and
svn-backup-dumps. Troy Heber helped write the last three.
* Ship svnmerge.README in subversion-tools.
-- Peter Samuelson <peter@p12n.org> Fri, 10 Nov 2006 08:45:01 -0600
subversion (1.4.2dfsg1-1) unstable; urgency=low
[ Peter Samuelson ]
* New upstream release.
- No longer ships IETF draft spec. (Closes: #393414)
- patches/svnsync-manpage, parts of patches/neon26, patches/svnshell:
Obsolete, removed.
- Re-roll upstream tarball to remove some unlicensed files from the
"contrib" directory. Update debian/copyright regarding other files
in "contrib". (Closes: #394395)
* patches/neon26: update for 1.4.2, add neon 0.26.2 to the whitelist.
* Improve libapache2-svn installation experience:
- Use a2enmod/a2dismod instead of hand-hacking.
- dav_svn.conf: Comment everything out. (Many will want to use
sites-available/* rather than dav_svn.conf anyway.) Fix some of
the text and add more. (Closes: #392805)
* libsvn-java: Remove alternative Depends: java1-runtime.
It does in fact require JRE 1.2 (java2-runtime).
* Build with neon26 instead of neon25.
* Ship some example code from upstream in the various devel packages.
- patches/examples-compile-instructions: New patch, some small doc fixes.
* Ship a lot more scripts in subversion-tools, including svnmerge
(Closes: #293528), svn2cl (Closes: #350133).
- List these scripts in the Description. (Closes: #357506)
- Downgrade most Depends to Recommends, augment Recommends and Suggests
to match the scripts.
* rules: Add explicit check and informative error message for trying to
build as root. (Closes: #396435)
* libapache2-svn Description: it's Apache 2.2, not 2.0. (Closes: #397113)
* patches/ruby-test-ra-race: replace my fix by upstream's better one,
should _really_ fix m68k build this time. (Closes: #397173)
* patches/jelmer-python-bindings: New patch: backport python binding
improvements by Jelmer Vernooij from trunk. This is needed for
certain advanced python-based tools.
-- Peter Samuelson <peter@p12n.org> Thu, 9 Nov 2006 00:07:42 -0600
subversion (1.4.0-5) unstable; urgency=medium
[ Peter Samuelson ]
* rules: Set HOME to a dummy value to prevent a build failure if the
real HOME is mode -x. Plus a few minor cleanups.
* rules: Link -ldb explicitly (rather than implicitly via -laprutil-1).
This is required for libdb symbol versioning to propagate.
Thanks to Pitr Jansen for help tracking this down.
* patches/svnshell: Fix insufficient argument checking in 'setrev'
command. (Closes: #392004)
--
subversion (1.4.0-3) unstable; urgency=low
[ Peter Samuelson ]
* patches/ruby-test-ra-race: New patch for another testsuite race
discovered on m68k.
* patches/ruby-typemap-digest: New patch to fix a m68k failure, quite
possibly the same failure we've seen sporadically on other arches
in the past. Thanks to Roman Zippel. (Closes: #387996)
* rules: sed *.la to change internal *.la references to -l form.
(Closes: #388733)
* control,rules: Reinstate libsvn-javahl as a dummy package, for
sarge upgrades. (Closes: #387901)
* control,rules: Disable Java on hurd-i386, requested by Cyril Brulebois.
* Build with apache 2.2 / apr1 / aprutil1 again, now that apache 2.2 is
going into unstable.
- aprutil1 always links to libdb4.4 nowadays. (Closes: #387396)
* libapache2-svn.postinst: Do not enable the dav_fs module: not needed
for a Subversion server.
[ Troy Heber ]
* debian/control clean up of Maintainer and Uploaders fields to reflect the
current team.
-- Troy Heber <troyh@debian.org> Tue, 3 Oct 2006 07:45:31 -0600
subversion (1.4.0-2) unstable; urgency=low
[ Peter Samuelson ]
* Run tests in 'build' target, not 'binary' target. This prevents a
build failure if 'binary' is run as root (not fakeroot).
* patches/svnsync-manpage: trivial typo fix from upstream.
* Delete README.db4.4: the upgrade procedure it describes is now fully
automatic.
-- Peter Samuelson <peter@p12n.org> Sun, 10 Sep 2006 05:05:47 -0500
subversion (1.4.0-1) unstable; urgency=low
[ Peter Samuelson ]
* New upstream version - well, not really new, it's rc5 rebranded.
* Revert libsvn1/apache2.2 change, since apache 2.2 is not yet in
unstable. libsvn1 is libsvn0 again, for now.
* patches/no-extra-libs: detect apr0/apr1 correctly, and use
pkg-config for neon.
* patches/neon26: new patch to build cleanly with neon 0.26.1.
Though we won't actually use it until #386652 is fixed.
* Document BDB 4.4 upgrade better; also, move the NEWS entry from
'libsvn0' to 'subversion' where it is more likely to actually be
read.
* patches/no-extra-libs-2: Tweak to remove more unnecessary linking.
--
subversion (1.4.0~rc4-2) experimental; urgency=low
[ Peter Samuelson ]
* Reenable apache support; build-depend on apache2-threaded-dev 2.2,
now that it's in experimental.
* Build-Depends: remove bison, relax python version again (as python
handling is now done by python-support).
* patches/ruby-txtdelta-apply-instructions: new patch from upstream,
fixes the test failure on amd64.
* Compile against libdb4.4, which should fix the famous "wedged
repository" issue.
- Build-Depends: libaprutil1-dev (>= 1.2.7+dfsg-1)
- Update rules, control, README.db4.4
- Add note to libsvn1.NEWS - please read it!
-- Peter Samuelson <peter@p12n.org> Fri, 18 Aug 2006 13:06:49 -0500
subversion (1.4.0~rc4-1) experimental; urgency=low
* There is a known issue with amd64 and the SvnDeltaTest in the ruby
testsuite.
[ Peter Samuelson ]
* New upstream release.
- commit-email.pl has option not to send diffs. (Closes: #217133)
- Help text clarified for options like --file. (Closes: #233099)
- Rediff patches. Delete patches already included upstream:
apache-crash-fix, bash_completion, lc_ctype, perl-test-clean,
svn_load_dirs-symlinks, swig-1.3.28.
- Add Build-Depends: zlib1g-dev.
* Bump subversion-tools dependencies on the other packages to >= 1.4.
* Support ENABLE_APACHE macro, to disable 'libapache2-svn'.
Disable apache until apache 2.2 makes its way into experimental.
* Switch to libapr1, which entails an ABI change to libsvn.
- libsvn0 -> libsvn1
- libsvn0-dev -> libsvn-dev
- patches/apr-abi: New patch: change the libsvn_* SONAMEs.
(This type of change should be upstream-driven, but upstream has
declined to do it.)
- patches/fix-bdb-version-detection: New patch: tweak BDB version
detection not to rely on an apr-util misfeature (#387105).
* Rename libsvn-javahl to libsvn-java, to comply (in spirit) with the
Java Policy. (Closes: #377119)
* Rename libsvn-core-perl to libsvn-perl, because it provides several
modules in the SVN:: namespace, not just SVN::Core.
* patches/limit-zlib-link: New patch from upstream to prevent
unnecessary -lz linkage in client binaries.
* Update copyright file again.
* Switch to python-support.
* subversion-tools: downgrade rcs and exim4 to Recommends.
* Add NEWS entry to libsvn1, explaining compatibility issues - please
read it, folks!
[ Troy Heber ]
* tweaked rpath patch HUNK 2, so it would apply cleanly.
-- Peter Samuelson <peter@p12n.org> Thu, 10 Aug 2006 20:43:19 -0500
subversion (1.3.2-6) unstable; urgency=low
[ Peter Samuelson ]
* Add libsvn0 Conflicts: subversion (<< 1.3) to prevent chaos from
linking to both neon24 and neon25.
* Add libsvn0 Conflicts: python2.3-subversion (<< 1.2.3dfsg1-1)
because of the libsvn_swig_py move. (Closes: #385146)
* Link with Berkeley DB 4.4. (Closes: #385589, #383880 again)
- patches/bdb-44: new patch cobbled together from upstream trunk
* patches/ruby-test-svnserve-race: update from our 'sleep 3' hack to
what I hope is a proper fix. Thanks to Kobayashi Noritada, Wouter
Verhelst and Roman Zippel. (Closes: #378837)
* Switch to python-support.
--
subversion (1.1.4-2) unstable; urgency=high
* Put call to dh_installdeb after call to dh_python. Fixes purge
of python2.3-subversion (closes: #308777)
* Disable full testsuite. All archs have already past it. No need to
burden the autobuilders with all the tests.
* Set DEB_BUILDDIR correctly since something is apparently setting it
incorrectly.
-- David Kimdon <dwhedon
OOP gives data hiding: a nonmember function normally cannot access an object's private or protected data.

Have you heard about friend functions?

We use them in object-oriented programming. A friend function of a given class is a function that is granted the same access to that class's private and protected data as its member functions. A friend is not itself a member of the class; it is a deliberate, limited relaxation of encapsulation.

For a function to get this access, its friend declaration must be made inside the body of the class (anywhere inside the class, in either the private or the public section), starting with the keyword friend.

Let's take an example:
#include <iostream>
using namespace std;

class C1; // forward declaration of class C1 so B1 can refer to it

class B1 {
private:
    int a;
public:
    B1() : a(0) {}
    void show(B1& x, C1& y);            // member function; made a friend of C1 below
    friend void show(B1& x, C1& y);     // declaration of global friend
};

class C1 {
private:
    int b;
public:
    C1() : b(6) {}
    friend void show(B1& x, C1& y);     // declaration of global friend
    friend void B1::show(B1& x, C1& y); // declaration of friend from other class
};

// Definition of a member function of class B1; this member is a friend of class C1
void B1::show(B1& x, C1& y) {
    cout << "Show via member function of B1" << endl;
    cout << "B1::a = " << x.a << endl;
    cout << "C1::b = " << y.b << endl;
}

// Global function, friend of both B1 and C1
void show(B1& x, C1& y) {
    cout << "Show via global function" << endl;
    cout << "B1::a = " << x.a << endl;
    cout << "C1::b = " << y.b << endl;
}

int main() {
    B1 a; // object a of class B1
    C1 b; // object b of class C1
    show(a, b);   // call to the global friend function
    a.show(a, b); // call to the member function of B1
}
Let’s solve a question
Find age using friend function in class
In this question, given the day, month and year of birth, we have to find the age of that person. The catch is that we must use a class and a friend declaration for the task. (Note: both solutions below actually declare the whole class student as a friend of date, which makes every member function of student a friend.)
Solution
#include <iostream>
#include <cstring>
using namespace std;

class student; // forward declaration

class date {
    int year;
    int dd;
    int mm;
public:
    date(int a, int b, int c) {
        dd = a;
        mm = b;
        year = c;
    }
    friend class student; // every member of student may access date's private data
};

class student {
    date DOB;
    char name[50];
public:
    student(const char* str) : DOB(3, 10, 1996) {
        strcpy(name, str);
        cout << "Name=" << name << endl;
        cout << "Date of Birth" << endl
             << DOB.dd << "/" << DOB.mm << "/" << DOB.year << endl;
    }
    void duration() {
        int cd, cm, cy;
        int d1, d2, d3;
        cout << "\nEnter the current date, month and year\n";
        cin >> cd >> cm >> cy;
        // absolute difference of each field (the original had d1 = DOB.dd = cd, a bug)
        d1 = (DOB.dd < cd) ? cd - DOB.dd : DOB.dd - cd;
        d2 = (DOB.mm < cm) ? cm - DOB.mm : DOB.mm - cm;
        d3 = cy - DOB.year;
        cout << "Age= " << d1 << "/" << d2 << "/" << d3;
    }
};

int main() {
    student s("Mariam");
    s.duration();
}
II Method

#include <iostream>
#include <cstring>
using namespace std;

class student; // forward declaration

class date {
    int year;
    int dd;
    int mm;
public:
    date(int y) {
        dd = 3;
        mm = 10;
        year = y;
    }
    friend class student;
};

class student {
    date DOB;
    char name[50];
public:
    student() : DOB(1996) {
        strcpy(name, "Mariam");
        cout << "Name=" << name << endl;
        cout << "Date of Birth" << endl
             << DOB.dd << "/" << DOB.mm << "/" << DOB.year << endl;
    }
    void duration() {
        int d = 2014 - DOB.year; // current year hard-coded
        cout << "Age==" << d;
    }
};

int main() {
    student s;
    s.duration();
}

(The original used the outdated <iostream.h> and <conio.h> headers and a trailing getch() call; the standard headers above work with any modern compiler.)
HAWAIIAN GAZETTE. TUESDAY, DECEMBER 25, 1917. SEMI-WEEKLY.
PEACE TERMS FAIL TO MEET TEUTON IDEAS

Flat Turning Down of Proposals Brings Abrupt Cessation To Pourparlers and Delegates Called Back To Petrograd

GERMAN RULERS HAVE NO IDEA OF YIELDING

Organization of New Ukraine Government Goes Forward Rapidly and Troops Seize Various Staff Headquarters
LONDON, December 23 (Associated Press). The Lenine-Trotzky government at Petrograd has already discovered ... was promised in the negotiations leading up to the armistice, and that the German rulers have no intent[ion of meeting] the Bolsheviki in the matter of a cessation of hostilities. The terms which the Russian delegates to the peace conference had been authorized to accept have been flatly refused by the German delegates, and the Russians have been ordered to cease further attempts at peace and to return to Petrograd.

Such is the information telegraphed yesterday by the Petrograd correspondent to the Daily Mail, who states that he has been able to verify this news. The peace pourparlers have definitely ceased, for the present at least.
Want No Kaiserism

In a speech dealing with the irreconcilable terms proposed by Germany as opposed to those desired by the Bolsheviki, Premier Trotzky declared to the Council of Soldiers' and Workmen's Deputies on Friday that he would not agree to the offensive proposals emanating from Berlin, and that Russia would resume her fight rather than accept a Berlin-dictated treaty.

"We did not overthrow czarism to kneel before the kaiser," he said. "But if through our exhaustion we had to accept the kaiser's terms, we would only do so to rise with the German people against militarism."
Ukraine Organized

In the meanwhile the organization of the independent Ukraine government is proceeding rapidly and steps are being taken to resume the fighting against the Teutons along the southwestern Rumanian front and to protect the Ukrainian border against Bolsheviki aggression. A Petrograd despatch to the Times states that Kaledin and his associates in the military government of the Cossacks have all resigned and have resumed their positions in their army, the government having passed into civil hands.

The Ukrainian troops have seized all the headquarters of the various staffs along the Rumanian front and have taken over the telegraph and wireless systems connecting these headquarters with the fighting lines. The army, composed of Ukrainians and Cossacks, is now united under the supreme command of General Stcherbatcheff. Ukrainian troops have been stationed along the Ukrainian borders on the Russian front and have taken up their positions, while the mobilization of all the Cossacks, to form the main army of offense on the south, has been ordered.
CAILLAUX FACES CHARGES OF TREASON

Chamber of Deputies Withdraws Immunity From Arrest
PARIS, December 23 (Associated Press). By a resolution which was adopted by a vote of 472 to [illegible], the chamber of deputies yesterday deprived Joseph Caillaux of his parliamentary immunity from arrest in connection with the charge of treasonable communications with the army. The charge against him will now be pressed.

The action of the chamber of deputies follows an investigation of charges that were made against the one time premier. To these charges he made denial before the chamber. Documents which clearly pointed to treasonable courses were presented and these he asserted were forgeries.
NEW ORLEANS SUFFERS HEAVY LOSS BY FIRE

NEW ORLEANS, December 22 (Associated Press). Damage to the extent of a quarter of a million dollars was done by a fire which broke out today in the wholesale district of the city. Three four-story buildings were destroyed by the blaze which raged furiously for half an hour before it could be checked.
ITALY'S FORCES DRIVE HUNS FAR BACK ALONG THREE MILE FRONT GAINING LOST GROUND

HEADQUARTERS ITALIAN ARMY, Northern Italy, December 23 (Associated Press). In a succession of brilliant attacks throughout Friday and yesterday, the forces of General Diaz have defeated the Austro-Germans and driven them back from all their holdings on the slopes of Monte Asolone.

Along a front of three miles and to a depth of two-thirds of a mile the Italians have hammered the Teutons back, taking prisoners and a number of guns and making certain the complete possession of the approaches to the San Lorenzo Pass, the road down which the invaders must pass on this sector if they would reach the Bassano plains.

VICTORY IS CONSIDERED NOTABLE

The Italian victory is the more notable in that it comes as a counter-attack to a tremendous offensive on the part of the enemy, for which troops were massed and many guns concentrated. In the initial attack, the men of General von Bulow gained the heights of Monte Asolone, crossing the Brenta River and threatening to secure the important road into Italy. The German losses were tremendous, but the orders of the German staff were to seize the heights commanding the pass at any cost. When the fourth drive gained the heights, the Italians were obliged to fall back and reform their lines.

VANTAGE POINTS ARE REGAINED

In the face of this partial defeat, the Italians have now not only regained all the lost ground but have wrested from the invaders some of the vantage points they had secured in the first drive into Italy on the Asiago Plateau. The Italian morale is completely restored and the supplies rushed to them by France and Great Britain have enabled them to withstand what is regarded as the climax of the Austro-German effort. The invaders are being more and more handicapped by the snow and the difficulty of transportation and it may yet be shown that General Cadorna was prophetic when he stated that the victory of the Teutons in pushing the Italians back on the defensive on their own soil might yet prove to be the most costly movement they have yet made in the war.

OTHER ATTEMPTS ARE FRUSTRATED

All attempts of the enemy to cross the Lower Piave towards Venice have apparently been abandoned, after the fruitless and costly efforts of early in the week. The observers have reported that the enemy is withdrawing both men and guns from the Adriatic sector and hurrying these north.
VALISE IS MISSING AND HUNS WORRIED

Important Diplomatic Papers Disappear On Journey From Switzerland To Berlin

GENEVA, December 23 (Associated Press). Evident alarm and grave worry over the loss of a valise which contained papers of importance is exhibited at the German legation here. It is surmised that the contents of the valise were such that the German foreign office is loath to have them fall into the hands of representatives of some neutral nation or any of the Entente Powers.

The missing valise was lost, it was reported to the police, at the Basle station while en route to Berlin. It contained, the report said, important diplomatic papers.

Whether the valise was stolen or mislaid and picked up by some stranger is not disclosed, but it is suspected that secret service men of some other nation which may be interested in the contents secured it.
CONFESSES MURDER OF KEETS CHILD

Man Convicted of Abduction Tells All and Implicates Others In Grave Crime

SPRINGFIELD, Missouri, December 23 (Associated Press). Claude Pierson, convicted a short time ago of the abduction of Baby Keets and sentenced to serve thirty-five years in the Missouri penitentiary, is now charged with the murder of the child. He has made a clean breast of the whole affair in a signed confession to which he was sworn yesterday, and implicates a number of others in the crime.

The abduction of the Keets child stirred all the Middle West, and when the body of the little one was found in a well feeling against the kidnapers ran high. Pierson was convicted and given a sentence which was considered almost equivalent to life imprisonment. He has broken down under the strain and his confession is the result. As a reward for the confession and the aid he is to give in the conviction of others it is expected that his penalty will be life imprisonment instead of death.
ARMOURS GET PROFITS FROM STOCKYARDS ALSO

WASHINGTON, December 23 (Associated Press). The inquiry into the stockyard ownership and meat industry control, by a congressional committee, was resumed today. One witness gave evidence indicating that the big Armour company is the real owner of the stockyards in many Central West and Western cities.
PROPOSES RELIEF FOR FARM LABOR SHORTAGE

WASHINGTON, December 22 (Associated Press). Secretary of War Baker today outlined a plan to permit farm boys of the army to return to their homes at periodical intervals and assist in crop production.

The plan was suggested by Mr. ... he made today to a delegation of New York farmers who protested against the scarcity of labor caused by the taking of young men for the training camps.
BILIOUS HEADACHE

All that is needed is to correct the biliousness and the headache disappears. Take Chamberlain's Tablets and you will soon be as well as ever. For sale by all dealers. Benson, Smith & Co., Ltd., Agents for Hawaii. Advertisement.
FIRST GOVERNOR OF ARIZONA WINS CASE
Election Contest Goes Against Republican Seated By Lower Court's Ruling

PHOENIX, Arizona, December 23 (Associated Press). Following an election contest which has been in the courts for more than a year and engendered much bitterness as well as seriously hampering the work of the legislature at its biennial session, George W. P. Hunt was yesterday declared by the state supreme court to be the rightful governor of Arizona. This will unseat the incumbent, Tom Campbell, a Republican, who has occupied the executive offices for only a few months.

George W. P. Hunt was the president of the Arizona constitutional convention and was elected as the first governor of the state. To this office he was reelected, and he was nominated by the Democrats for a third term in the summer of 1916. The election was very close and the secretary of state for some time refused to issue a certificate of election. He finally issued one to Hunt and then a contest was started. Both Campbell and Hunt took the oath of office and Hunt retained possession of the office during the court proceedings attendant upon a recount in which numerous discrepancies were found and charges of fraud made on both sides. The lower court decided for Campbell and Hunt turned the office over to him but took an appeal.

Hunt has been the special representative of labor interests in his policies as governor of the state and had the bitter opposition of the great corporate interests of the state.
PREMIER OF AUSTRALIA RESIGNS

Defeat of Conscription Carries Government Down Also

SYDNEY, December 22 (Special to The Advertiser). Following the signal defeat of conscription in Australia for the second time, which is tantamount to a vote of no confidence in the government, William Hughes, premier of the commonwealth, and his cabinet have tendered their resignations.

Sir Ronald Munro Ferguson, governor general of Australia, immediately invested in Mr. Tudor, leader of the labor party and a violent anti-conscriptionist, the task of forming a new ministry.

It is believed that with Mr. Tudor as head of the government, labor unions, which were subjected to a considerable set-back during the past several months, will again spring into existence with a power as strong as ever.
WASHINGTON, December 22 (Associated Press). Lieutenants Lufbery and Thaw, who have become world famous for daring air battles as members of the Lafayette Escadrille, have been commissioned with twenty-one others in the aviation reserve corps. This action has been taken upon personal recommendation from Maj.-Gen. J. J. Pershing. Lufbery and Thaw have been made majors, while eight others are commissioned as captains.
SUPPOSED FIT IS FOUND TO BE JUST PLAIN DRUNK

James Kalani, Hawaiian, was picked up on the corner of Alakea and Hotel Streets at ten o'clock last night by the Emergency Ambulance, in answer to a telephone call which gave the man's ailment as showing every symptom of a fit. When examined at the hospital it was discovered that he was suffering only from an overdose of cheap whiskey, for which he was given treatment. Kalani was not extended the courtesy of a bed in the hospital ward but was booked for drunkenness and sent to a cell below.
RODIEK IS FINED

FIRES FOLLOW EXPLOSION IN KRUPP PLANT
Flames Are Fought For More Than Two Days Before They Are Finally Checked, Doing Damage To Sections of Works

DUTCH WORKMEN SENT AWAY AS FIRST RESULT

To Replace Submarine Losses Great Works At Kiel Have Recently Been Quadrupled In Capacity, Working Incessantly

AMSTERDAM, December 23 (Associated Press). Following an explosion in the electric plant of the great Krupp armament works at Essen, as a result of which serious damage was inflicted upon many sections of the works, fire broke out which further damaged the plant. The flames were fought for more than two days before they were finally checked.

Such is the report telegraphed to the Telegraaf from a border point, the news being brought to Holland by returning Dutch workmen, who have all been expelled from Essen and ordered to return to their homes. Suspicion for the explosion rests upon some of the Dutch workmen.

MAIN GUN FACTORY

The Essen plant of the Krupp works is the principal one of four, the three others being at Annen, Kiel and Gruson. More than half the men in the army of steel workers employed by Krupps are at Essen, where, in time of peace, 24,000 are on the payroll. It is estimated that this number has been quadrupled since the outbreak of the war and the whole energy of the nation has been turned towards securing victory. Any serious disaster at the Essen works will handicap the army in its several campaigns.

SUB LOSSES MANY

A despatch yesterday by way of Geneva, Switzerland, states that the Krupp works at Kiel has just been quadrupled. The announcement in Berne is that Berlin has been forced to extraordinary efforts to replace the submarine losses, which are many more than the German admiralty is willing to admit. The capacity of the works at Kiel is to be taxed to the limit and the submarine campaign is to be continued in the knowledge that any cessation now would be a confession to all Germany that the most boasted arm of offense had proven a failure.
BOOZE IMPORTS IN CANADA ARE STOPPED

OTTAWA, December 23 (Associated Press). A crushing blow to the liquor business was dealt here today with the announcement from Premier Robert L. Borden that the importation of intoxicating liquors will be forbidden after next Monday, December 24. Manufacture of intoxicants is to be prohibited after a date which is to be fixed later.
G. O. P. TO MEET FEBRUARY TWELFTH

WASHINGTON, December 23 (Associated Press). William R. Willcox, chairman of the Republican National Committee, has issued a call for a meeting of the committee at St. Louis on February 12. It is understood that he recently informed friends that business pressure makes an early convening of the committee desirable.
BUT WILL ESCAPE PRISON CELL LIFE

Court Cannot See That Violation Was Technical But Extends Leniency To Guilty Man

WILLINGNESS TO GIVE TESTIMONY REWARDED

San Francisco Report Says That Prisoner Will Return To Honolulu On Release

SAN FRANCISCO, December 22 (Associated Press). George Rodiek, former German consul at Honolulu, who pleaded guilty in the Hindu conspiracy cases, was fined $10,000 today by the court. This also carries with it loss of rights as a citizen. Rodiek announced that he would pay the fine promptly.

The sentence of H. A. Schroeder, formerly Rodiek's secretary, who also pleaded guilty, was continued until January to give him an opportunity to testify for the government.

Character Witnesses Heard

R. W. Shingle, president of the Waterhouse Trust Company, Richard Ivers, vice president of Brewer & Co., and E. R. Stackable, former collector of customs here, appeared as character witnesses in Rodiek's behalf. U. S. District Attorney Preston, government prosecutor, asked for leniency for Rodiek in view of his willingness to testify for the government.

Judge Van Fleet said he did not doubt Rodiek's business probity, but that nevertheless Rodiek must be made to feel the results of his act of treachery, and said he must ignore the claim that the acts admitted were purely a technical violation of the neutrality law.

Escapes Prison Sentence

The court in pronouncing sentence declared that sworn allegiance to the United States calls for the highest degree of fidelity and added that he believed Rodiek to be of genteel birth, and therefore did not desire to humiliate him or add to his physical suffering by sentencing him to imprisonment.

Explains Diary Entries

Prior to being sentenced Rodiek took the stand, declaring that he desired to explain certain passages appearing in the diary of Capt. Karl Grasshof, commander of the German gunboat Geier, interned in Honolulu harbor at the outbreak of the war in 1914.

With reference to the intimation in the diary that Rodiek facilitated the illegal shipment of munitions, Rodiek explained his visit to the Geier after the acts of sabotage by saying that Grasshof had asked him to note officially the condition of the vessel.

It is understood that immediately after his release Rodiek will return to Honolulu.
JAPAN DENIES ANY TROOPS IN SIBERIA

Despite Reports Government Says No Forces Sent and No Mobilization Started

TOKIO, December 23 (Associated Press). Japan has not moved any of her military or naval forces to either Harbin or Vladivostok, despite all the reports that have been sent out to the contrary. The Japanese government has not mobilized any portion of her forces, nor despatched any troops into China, Manchuria or Siberia, nor has the government any intention of mobilizing any portion of her army for despatch to any point.

Such is the emphatic statement authorized by government officials here yesterday, who state that the reports of troop movements and of mobilizations are wholly unfounded.
ALIEN ENEMIES TO BE WARNED AGAIN

This Time When Told To Obey the Law, It Will Be Serious

Alien enemies in Honolulu are to be warned for the second time to keep away from the waterfront, especially when a ship is in port. Shortly after the declaration of war with Germany, instructions to this effect were given all who registered at the federal offices, but it is admitted that they have not been strictly lived up to.

United States Marshal J. J. Smiddy said yesterday that although plain clothes men patrol the piers when ships are at anchor, certain alien enemies are known to have paid visits to incoming and outgoing steamers to bid aloha to their friends.

Action of this kind may be purely harmless, but it is a violation of the law, Smiddy remarked, and after the holidays he intends to issue a final warning to all enemy aliens to keep clear of prohibited areas, and especially the wharves.

The law specifically states that subjects of a country which is at war with the United States must not enter those areas where they might be able to do any damage, and federal officials are determined to put an immediate end to this indiscriminate prowling.

This second warning will be the last given, said Marshal Smiddy, and those who again violate the law will have to stand the consequences.
MAXIMUM PENALTY FOR CONSPIRATOR ORDERED BY COURT

Albert Kaltschmidt Must Serve Four Years In Prison and Pay Heavy Fine Also

CO-CONSPIRATORS WILL ALSO LIVE IN CELLS

Federal Judge Scathingly Condemns Prisoners Convicted of Conspiracies To Wreck

DETROIT, December 23 (Associated Press). Sentence to the extreme penalty of the law was pronounced upon Albert Kaltschmidt in the federal court yesterday, and sentences hardly less severe were given to his co-defendants, who were found guilty by the jury yesterday of widespread plots to destroy tunnels, railroad equipment and other property in the United States and Canada, and of conspiracy to violate the neutrality of the United States in connection with the plots that were to be carried out in the Dominion of Canada.

Given Maximum Penalty

Kaltschmidt was sentenced on three counts of the indictment under which he was convicted. The maximum sentence imposed upon him was four years imprisonment in a federal penitentiary and the payment of a fine of $20,000. Of the other convicted defendants Mrs. Ida Kaltschmidt Neef received the most severe punishment, her sentence being three years imprisonment and a fine of $15,000. Her husband, Fritz A. Neef, manager of a Detroit electrical concern, and Karl Schmidt and Maria Schmidt, his wife, were each sentenced to serve prison terms of two years and to pay fines of $5,000 respectively.

Scathingly Condemned

In sentencing the prisoners the court dwelt long and seriously upon the crimes of which they had been convicted, the serious loss of life that might have resulted, and the magnitude of their offenses against the United States government in using its soil as a place for planning war plots against a nation with which the United States was on terms of friendship. The judge said that it was evident that Albert Kaltschmidt was head of the plotters as well as head of the family and that he would therefore give him the most severe sentence, regretting that the law did not permit him to impose a heavier penalty. He then pronounced sentence upon each in turn.
RESPONSIBILITY FOR

President of Road Offers To Make General Settlement

LOUISVILLE, Kentucky, December 23 (Associated Press). President Smith of the Louisville and Nashville Railroad acknowledges the legal responsibilities of the line for the wreck of Friday, in which a large number of passengers were killed and wounded in a rear-end collision near Shepherdsville. He suggests that those who have claims against the line submit these to a committee for adjudication, thus avoiding unnecessary litigation.
BADLY DEMORALIZED
NEW YORK, December 23 (Associated Press) Complete demoralization of the printing ink industry is threatened by the import embargo declared upon carbon black, one of the principal ingredients of black ink, states Philip Buxton, president of the National Association of Printing Ink Manufacturers. Mr. Buxton urges the import board to reconsider its action in declaring carbon black an unnecessary import.
No Rest For The Aching Back

Housework is too hard for a woman who is half sick, nervous and always tired. But it keeps piling up, and gives weak kidneys no time to recover. If your back is lame and achy and your kidneys irregular; if you have "blue spells," sick headaches, nervousness, dizziness and rheumatic pain, use Doan's Backache Kidney Pills. They have done wonders for thousands of women worn out with weak kidneys. "When Your Back is Lame, Remember the Name." (Don't simply ask for a kidney remedy; ask distinctly for Doan's Backache Kidney Pills and take no other.) Doan's Backache Kidney Pills are sold by all druggists and storekeepers, or will be mailed on receipt of price by the Hollister Drug Co., agents for the Hawaiian Islands. (Advertisement)
EXPLOSION IN SACRAMENTO
Police Claim To Have Evidence In Statement Emanating From Headquarters of Industrial Workers of World

THIRTY ARRESTED AND WILL BE INVESTIGATED

Two Are Found Carrying Dynamite and One With Bad Record Is Wanted In Several States For Similar Plots
SACRAMENTO, December 23 (Associated Press) Growing out of the blowing up of the executive mansion and the endangering of the lives of the governor and his family have come developments that point to an extensive plot for wholesale dynamiting, have led to the arrest of thirty men and the probable arrest and prosecution of a number of others either directly or less closely connected with the I. W. W. The attempt to kill the governor and the other alleged plans are directly attributed to the I. W. W.
EXTENSIVE PLANS

Evidence secured by the police alleges that a statement was made at the I. W. W. headquarters here that the dynamiting of the governor's home was planned and carried out by that organization and was to be merely the first of a series of other dynamite explosions. Included among these was a plan to destroy the great power plant of the Pacific Electric Company, thus throwing into darkness and taking the motive power from a large number of important industries.
WHOLESALE ARRESTS

As a result of this evidence the police have made thirty arrests and, it is asserted, two of those arrested were found carrying a box containing dynamite and will be charged, in the first instance, with having the explosive unlawfully in their possession.

One of these men gave his name as William Hood, but detectives at headquarters say that this is only one of a number of aliases which he has worn, that he is "wanted" in several states in different parts of the country, being sought in connection with a number of dynamite outrages and explosion plots that were discovered in time to prevent actual damage.

All of those arrested are held for investigation.
" NATIONWIDE PLOT
Policevofricials hint that -while
the evidence they have jrotnts di
rect! v -bnl V to olots ' at H'vnn m i t.
mg in trtis state there are indica
tions that this is but a part of a
nationwide plot on the part of the
J, V . W, to damage industrial
plants of the countrv ho aa tr
hinder and retard the hatiort in
if rVla(tllt sf K'ui ,
COMMITTEE OF SENATE

WASHINGTON, December 22 (Associated Press) Eleven Western beet sugar producers were called today to testify before the senate sub-committee investigating the sugar shortage. Food Administrator Hoover was asked to appear but did not testify, being excused until a later date.
HUN RUTHLESSNESS TAKES LIVES OF NINETY-FIVE

LONDON, December 22 (Associated Press) The British armed steamship Stephen Furness has been sunk by a submarine in the Irish Channel with a loss of 95 lives.
http://chroniclingamerica.loc.gov/lccn/sn83025121/1917-12-25/ed-1/seq-5/ocr/
Collection Pipeline
15 October 2014
Contents
- First encounters
- Defining Collection Pipeline
- Exploring more pipelines and operations
- Alternatives
- Nested Operator Expressions
- Laziness
- Parallelism
- Immutability
- When to Use It
The collection pipeline is one of the most common, and pleasing, patterns in software. It's something that's present on the unix command line, the better sorts of OO languages, and gets a lot of attention these days in functional languages. Different environments have slightly different forms, and common operations have different names, but once you get familiar with this pattern you don't want to be without it.
First encounters
I first came across the collection pipeline pattern when I started with Unix. For example, let's imagine I want to find all my bliki entries that mention "nosql" in the text. I can do this with grep:
grep -l 'nosql' bliki/entries/*
I might then want to find out how many words are in each entry
grep -l 'nosql' bliki/entries/* | xargs wc -w
and perhaps sort them by their word count
grep -l 'nosql' bliki/entries/* | xargs wc -w | sort -nr
and then just print the top 3 (removing the total)
grep -l 'nosql' bliki/entries/* | xargs wc -w | sort -nr | head -4 | tail -3
Compared with other command line environments I'd come across before (or indeed later) this was extremely powerful.
At a later date I found the same pattern when I started using Smalltalk. Let's imagine I have a collection of article objects (in someArticles), each of which has a collection of tags and a word count. I can select only those articles that have the #nosql tag with
someArticles select: [ :each | each tags includes: #nosql]
The select method takes a single argument, a lambda (defined by the square brackets, and called a "block" in Smalltalk), which defines a boolean function. select applies the lambda to every element in someArticles and returns a collection of only those articles for which the lambda evaluates to true.
To sort the result of that code, I expand the code.
(someArticles select: [ :each | each tags includes: #nosql]) sortBy: [:a :b | a words > b words]
The sortBy method is another method that takes a lambda, this time the code used to sort the elements. Like select it returns a new collection, so I can continue the pipeline
((someArticles select: [ :each | each tags includes: #nosql]) sortBy: [:a :b | a words > b words]) copyFrom: 1 to: 3
The core similarity to the unix pipeline is that each of the methods involved (select, sortBy, and copyFrom) operates on a collection of records and returns a collection of records. In unix that collection is a stream with the records as lines in the stream; in Smalltalk the collection is of objects, but the basic notion is the same.
These days, I do much more programming in Ruby, where the syntax makes it nicer to set up a collection pipeline as I don't have to wrap the earlier stages of the pipeline in parentheses
some_articles
  .select {|a| a.tags.include?(:nosql)}
  .sort_by {|a| a.words}
  .take(3)
Forming a collection pipeline as a method chain is a natural approach when using an object-oriented programming language. But the same idea can be done by nested function invocations.
To go back to some basics, let's approach how you might set up a similar pipeline in Common Lisp. I can store each article in a structure called article, which allows me to access the fields with functions named like article-words and article-tags. The function some-articles returns the ones I start with.
The first step is to select only the nosql articles.
(remove-if-not (lambda (x) (member 'nosql (article-tags x))) (some-articles))
As with the case with Smalltalk and Ruby, I use a function remove-if-not that takes both the list to operate on and a lambda to define the predicate. I can then expand the expression to sort them, again using a lambda
(sort (remove-if-not (lambda (x) (member 'nosql (article-tags x))) (some-articles)) (lambda (a b) (> (article-words a) (article-words b))))
I then select the top 3 with subseq.
(subseq (sort (remove-if-not (lambda (x) (member 'nosql (article-tags x))) (some-articles)) (lambda (a b) (> (article-words a) (article-words b)))) 0 3)
The pipeline is there, and you can see how it builds up pretty nicely as we go through it step by step. However it's questionable as to whether its pipeline nature is clear once you look at the final expression [1]. The unix pipeline, and the smalltalk/ruby ones have a linear ordering of the functions that matches the order in which they execute. You can easily visualize the data starting at the top-left and working its way right and/or down through the various filters. Lisp uses nested functions, so you have to resolve the ordering by reading from deepest function up.
The popular recent lisp, Clojure, avoids this problem allowing me to write it like this.
(->> (articles)
     (filter #(some #{:nosql} (:tags %)))
     (sort-by :words >)
     (take 3))
The "->>" symbol is a threading macro, which uses lisp's powerful syntactic macro capability to thread the result of each expression into an argument of the next expression. Providing you follow conventions in your libraries (such as making the subject collection the last argument in each of the transformation functions) you can use this to turn a series of nested functions into a linear pipeline.
For many functional programmers, however, using a linear approach like this isn't something important. Such programmers handle the depth ordering of nested functions just fine, which is why it took such a long time for an operator like "->>" to make it into a popular lisp.
These days I often hear fans of functional programming extol the virtues of collection pipelines, saying that they are a powerful feature of functional languages that OO languages lack. As an old Smalltalker I find this rather annoying, as Smalltalkers used them widely. The reason people say that collection pipelines aren't a feature of OO programming is that the popular OO languages like C++, Java, and C# didn't adopt Smalltalk's use of lambdas, and thus didn't have the rich array of collection methods that underpin the collection pipeline pattern. As a result collection pipelines died out for most OO programmers. Smalltalkers like me cursed the lack of lambdas when Java became the big thing in town, but we had to live with it. There were various attempts to build collection pipelines using what we could in Java; after all, to an OOer, a function is merely a class with one method. But the resulting code was so messy that even those familiar with the technique tended to give up. Ruby's comfortable support for collection pipelines was one of the main reasons I started using Ruby heavily around 2000. I'd missed things like that a lot from my Smalltalk days.
These days lambdas have shaken off much of their reputation for being an advanced and little-used language feature. Among mainstream languages, C# has had them for several years, and even Java has finally joined in. [2] So now collection pipelines are viable in many languages.
Defining Collection Pipeline
I consider Collection Pipeline to be a pattern of how we can modularize and compose software. Like most patterns, it pops up in lots of places, but looks superficially different when it does so. However if you understand the underlying pattern, it makes it easy to figure out what you want to do in a new environment.
A collection pipeline lays out a sequence of operations that pass a collection of items between them. Each operation takes a collection as an input and emits another collection (except the last operation, which may be a terminal that emits a single value). The individual operations are simple, but you can create complex behavior by stringing together the various operations, much as you might plug pipes together in the physical world, hence the pipeline metaphor.
Collection Pipeline is a special case of the Pipes and Filters pattern. The filters in Pipes and Filters correspond to the operations in Collection Pipeline; I replace "filter" with "operation" because filter is a common name for one of the kinds of operations in a Collection Pipeline. From another perspective, the collection pipeline is a particular, but common, case of composing higher-order functions, where the functions all act on some form of sequence data structure.
The operations and the collections that are passed between the operations take different forms in various contexts.
In Unix the collection is a text file whose items are the lines in the file. Each line contains various values, separated by whitespace. The meaning of each value is given by its ordering in the line. The operations are unix processes and collections are composed using the pipeline operator with the standard output of one process piped to the standard input of the next.
In an object-oriented program the collections are a collection class (list, array, set, etc). The items in the collection are the objects within the collection, these objects may be collections themselves or contain more collections. The operations are the various methods defined on the collection class itself - usually on some high level superclass. The operations are composed through a method chain.
In a functional language the collections are collections in a similar way to that of an object-oriented language. However the items this time are generic collection types themselves - where an OO collection pipeline would use objects, a functional language would use a hashmap. The elements of the top level collection may be collections themselves and the hashmap's elements may be collections - so like the object-oriented case we can have an arbitrarily complex hierarchical structure. The operations are functions, they may be composed either by nesting or by an operator that's capable of forming a linear representation, such as Clojure's arrow operators.
The pattern pops up in other places too. When the relational model was first defined it came with a relational algebra which you can think of as a collection pipeline where the intermediate collections are constrained to be relations. But SQL doesn't use the pipeline approach, instead using an approach that's rather like comprehensions (which I'll discuss later.)
The notion of a series of transformations like this is a common approach to structuring programs - hence the harvesting of the Pipes and Filters architectural pattern. Compilers often work this way, transforming from source code, to syntax tree, through various optimizations, and then to output code. The distinguishing thing about a collection pipeline is that the common data structure between stages is a collection, which leads to a particular set of common pipeline operations.
Exploring more pipelines and operations
The one example pipeline I've used so far just uses a few of the operations that are common in collection pipelines. So now I'll explore more operations with a few examples. I'll stick with ruby, as I'm more familiar with that language these days, but the same pipelines can usually be formed in other languages that support this pattern.
Getting total word counts (map and reduce)
Two of the most important pipeline operations can be explained with a simple task: how to get a total word count for all the articles in the list. The first of these operations is map, which returns a collection whose members are the result of applying the given lambda to each element in the input collection.
[1, 2, 3].map{|i| i * i} # => [1, 4, 9]
So if we use this we can transform a list of articles into a list of the word counts for each article. At this point we can then apply one of the more awkward collection pipeline operations: reduce. This operation reduces an input collection into a single result. Often any function that does this is referred to as a reduction. Reductions often reduce to a single value, and then can only appear as the final step in a collection pipeline. The general reduce function in ruby takes a lambda which has two variables, the usual one for each element and another for an accumulator. At each step in the reduction it sets the value of the accumulator to the result of evaluating the lambda with the accumulator and the new element. You can then sum a list of numbers like this
[1, 2, 3].reduce {|acc, each| acc + each} # => 6
With these two operations on the menu, calculating the total word count is a two-step pipeline.
some_articles
  .map {|a| a.words}
  .reduce {|acc, w| acc + w}
The first step is the map that transforms the list of articles into a list of word counts. The second step runs a reduction on the list of word counts to create a sum.
You can pass functions into pipeline operations either as lambdas or by the name of a defined function
At this point, it's worth mentioning that there are a couple of different ways that you can represent the functions that make up steps in a collection pipeline. So far, I've used a lambda for each step, but an alternative is to just use the name of the function. Writing this pipeline in clojure, the natural way to write it would be
(->> (articles) (map :words) (reduce +))
In this case, just the names of the relevant functions are enough. The function passed to map is run on each element of the input collection, and the reduce is run with each element and an accumulator. You can use the same style with ruby too; here words is a method that's defined on each object in the collection. [3]
some_articles
  .map(&:words)
  .reduce(:+)
In general, using the name of a function is a bit shorter, but you're limited to a simple function call on each object. Using lambdas gives you a bit more flexibility, for a bit more syntax. When I program in Ruby I tend to prefer using a lambda most of the time, but if I were working in Clojure I'd be more inclined to use function names when I can. It doesn't matter greatly which way you go. [4]
Getting the number of articles of each type (group-by)
For our next pipeline example, let's figure out how many articles there are of each type. Our output is a single hashmap whose keys are the types and values are the corresponding number of articles. [5]
To pull this off, we first need to group our list of articles by the article's type. The collection operator to work with here is a group-by operation. This operation puts the elements into a hash indexed by the result of executing its supplied code on that element. I can use this operation to divide the articles into groups based on their type.
some_articles .group_by {|a| a.type}
All I need to do now is get a count of the articles in each group. On the face of it, this is a simple task for the map operation, just running a count on the number of articles. But the complication here is that I need to return two bits of data for each group: the name of the group and the count. A simpler, although connected problem is that the map example we saw earlier uses a list of values, but the output of the group-by operation is a hashmap.
It's often useful to treat hashmaps as lists of key-value pairs.
This issue is a common one in collection pipelines once we've gone past the simple unix example. The collections that we may pass around are often lists, but can also be hashes. We need to easily convert between the two. The trick to doing so is to think of a hash as a list of pairs, where each pair is the key and corresponding value. Exactly how each element of a hash is represented varies from language to language, but a simple (and common) approach is to treat each hash element as a two-element array: [key, value]. Ruby does exactly this and also allows us to turn an array of pairs into a hash with the to_h method. So we can apply a map like this
some_articles
  .group_by {|a| a.type}
  .map {|pair| [pair[0], pair[1].size]}
  .to_h
This kind of bouncing between hashes and arrays is quite common with collection pipelines. Accessing the pair with array lookups is a bit awkward, so Ruby allows us to destructure the pair into two variables directly like this.
some_articles
  .group_by {|a| a.type}
  .map {|key, value| [key, value.size]}
  .to_h
Destructuring is a technique that's common in functional programming languages, since they spend so much time passing around these list-of-hash data structures. Ruby's destructuring syntax is pretty minimal, but enough for this simple purpose.
Doing this in clojure is pretty much the same: [6]
(->> (articles)
     (group-by :type)
     (map (fn [[k v]] [k (count v)]))
     (into {}))
Getting the number of articles for each tag
For the next pipeline, we'll produce article and word counts for each tag mentioned in the list. Doing this involves a considerable reorganization of the collection's data structure. At the moment our top level item is an article, which may contain many tags. To do this we need to unravel the data structure so our top level is a tag that contains multiple articles. One way of thinking about this is that we're inverting a many-to-many relationship, so that the tag is the aggregating element rather than the article.
This example inverts a many-to-many relationship
This reorganizing of the hierarchical structure of the collection that starts the pipeline makes for a more complicated pipeline, but is still well within the capabilities of this pattern. With something like this, it's important to break it down into small steps. Transformations like this are usually much easier to reason about when you break the full transformation down into little pieces and string them together - which is the whole point of the collection pipeline pattern.
The first step is to focus on the tags, exploding the data structure so that we have one record for each tag-article combination. I think of this is rather like how you represent a many-to-many relationship in a relational database by using an association table. To do this I create a lambda that takes an article and emits a pair (two element array) for each tag and the article. I then map this lambda across all of the articles.
some_articles .map {|a| a.tags.map{|tag| [tag, a]}}
which yields a structure like this:
[
  [
    [:nosql, Article(NoDBA)]
    [:people, Article(NoDBA)]
    [:orm, Article(NoDBA)]
  ]
  [
    [:nosql, Article(Infodeck)]
    [:writing, Article(Infodeck)]
  ]
  # more rows of articles
]
The result of the map is a list of lists of pairs, with one nested list for each article. That nested list is in the way, so I flatten it out using the flatten operation.
some_articles .map {|a| a.tags.map{|tag| [tag, a]}} .flatten 1
yielding
[
  [:nosql, Article(NoDBA)]
  [:people, Article(NoDBA)]
  [:orm, Article(NoDBA)]
  [:nosql, Article(Infodeck)]
  [:writing, Article(Infodeck)]
  # more rows of articles
]
This task of generating a list with an unnecessary level of nesting that needs to be flattened out is so common that most languages provide the flat-map operation to do this in a single step.
some_articles .flat_map {|a| a.tags.map{|tag| [tag, a]}}
Once we have a list of pairs like this, it's a simple task to group it by the tag
some_articles
  .flat_map {|a| a.tags.map {|tag| [tag, a]}}
  .group_by {|pair| pair.first}
yielding
{
  :people: [ [:people, Article(NoDBA)] ]
  :orm: [
    [:orm, Article(NoDBA)]
    [:orm, Article(OrmHate)]
  ]
  :writing: [ [:writing, Article(Infodeck)] ]
  # more records
}
But like with our first step, this introduces an annoying extra level of nesting, because the value of each association is a list of key/article pairs rather than just a list of articles. I can trim this out by mapping a function to replace the list of pairs with a list of articles.
some_articles
  .flat_map {|a| a.tags.map {|tag| [tag, a]}}
  .group_by {|pair| pair.first}
  .map {|k, pairs| [k, pairs.map {|p| p.last}]}
this yields
{
  :people: [ Article(NoDBA) ]
  :orm: [ Article(NoDBA), Article(OrmHate) ]
  :writing: [ Article(Infodeck) ]
  # more records
}
Now I've reorganized the basic data to articles for each tag, reversing the many-to-many relationship. To produce the required results all I need is a simple map to extract the exact data I need
some_articles
  .flat_map {|a| a.tags.map {|tag| [tag, a]}}
  .group_by {|pair| pair.first}
  .map {|k, pairs| [k, pairs.map {|p| p.last}]}
  .map {|k, v| [k, {articles: v.size, words: v.map(&:words).reduce(:+)}]}
  .to_h
This yields the final data structure which is a hash of hashes.
:nosql:
  :articles: 4
  :words: 3906
:people:
  :articles: 1
  :words: 561
:orm:
  :articles: 2
  :words: 2279
:writing:
  :articles: 1
  :words: 1145
:ruby:
  :articles: 1
  :words: 1313
:ddd:
  :articles: 1
  :words: 482
Doing the same task in Clojure takes the same form.
(->> (articles)
     (mapcat #(map (fn [tag] [tag %]) (:tags %)))
     (group-by first)
     (map (fn [[k v]] [k (map last v)]))
     (map (fn [[k v]] {k {:articles (count v), :words (reduce + (map :words v))}}))
     (into {}))
Clojure's flat-map operation is called mapcat.
Building up a more complicated pipeline like this can be more of a struggle than the simple ones I've shown earlier. I find it's easiest to carefully do each step at a time, looking carefully at the output collection from each step to ensure it's in the right shape. Visualizing this shape usually requires some form of pretty-printing to display the collection's structure with indentation. It's also useful to do this in a rolling test-first style, writing the test initially with some simple assertion for the shape of the data (such as just the number of records for the first step) and evolving the test as I add extra stages to the pipeline.
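As a small sketch of that kind of shape-checking (the hash contents here are invented for illustration, shaped like the tag summary built below), Ruby's standard pp library renders nested collections with indentation:

```ruby
require 'pp' # pretty-printing support from Ruby's standard library

# A hypothetical intermediate pipeline result, shaped like the
# tag -> summary hash this article builds up.
shape = {nosql:   {articles: 4, words: 3906},
         orm:     {articles: 2, words: 2279},
         writing: {articles: 1, words: 1145},
         people:  {articles: 1, words: 561}}

# pretty_inspect returns the indented rendering as a string;
# pp(shape) would print the same thing straight to stdout.
rendered = shape.pretty_inspect
puts rendered
```

Inspecting the intermediate collection like this after each new stage makes it much easier to spot when a step has produced the wrong nesting.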
The pipeline I have here makes sense as you build it up stage by stage, but the final pipeline doesn't reveal too clearly what's going on. The first stages are really all about indexing the list of articles by each tag, so I think it reads better by extracting that task into its own function.
(defn index-by [f, seq]
  (->> seq
       (mapcat #(map (fn [key] [key %]) (f %)))
       (group-by first)
       (map (fn [[k v]] [k (map last v)]))))

(defn total-words [articles]
  (reduce + (map :words articles)))

(->> (articles)
     (index-by :tags)
     (map (fn [[k v]] {k {:articles (count v), :words (total-words v)}}))
     (into {}))
I also felt it was worth factoring the word count into its own function. The factoring adds to the line-count, but I think I'm always happy to add some structuring code if it makes it easier to understand. Terse, powerful code is nice - but terseness is only valuable in the service of clarity.
To do this same factoring in an object-oriented language like Ruby, I need to add the new index_by method to the collection class itself, since I can only use the collection's own methods in the pipeline. With Ruby I can monkey-patch Array to do this
class Array
  def invert_index_by &proc
    flat_map {|e| proc.call(e).map {|key| [key, e]}}
      .group_by {|pair| pair.first}
      .map {|k, pairs| [k, pairs.map {|p| p.last}]}
  end
end
I've changed the name here because the simple name "index_by" makes sense in the context of a local function, but doesn't make so much sense as a generic method on a collection class. Needing to put methods on the collection class can be a serious downside of the OO approach. Some platforms don't allow you to add methods to a library class at all, which rules out this kind of factoring. Others allow you to modify the class with monkey patching like this, but this causes a globally visible change to the class's API, so you have to think more carefully about it than a local function. The best option here is to use mechanisms like C#'s extension methods or Ruby's refinements that allow you to change an existing class, but only in the context of a smaller namespace.
Once I have that method defined, I can factor the pipeline in a similar way to the clojure example.
total_words = -> (a) {a.map(&:words).reduce(:+)}

some_articles
  .invert_index_by {|a| a.tags}
  .map {|k, v| [k, {articles: v.size, words: total_words.call(v)}]}
  .to_h
Here I also factored out the word counting function like I did for the Clojure case, but I find the factoring less effective in ruby since I have to use an explicit method to call the function I created. It's not much, but it does add a bit of friction to the readability. I could make it a full method, of course, that would get rid of the call syntax. But I'm tempted to go a bit further here and add a class to contain the summary functions.
class ArticleSummary
  def initialize articles
    @articles = articles
  end
  def size
    @articles.size
  end
  def total_words
    @articles.map {|a| a.words}.reduce(:+)
  end
end
Using it like this
some_articles
  .invert_index_by {|a| a.tags}
  .map {|k, v| [k, ArticleSummary.new(v)]}
  .map {|k, a| [k, {articles: a.size, words: a.total_words}]}
  .to_h
Many people would feel it too heavyweight to introduce a whole new class just to factor out a couple of functions in a single usage. I have no trouble introducing a class for some localized work like this. In this particular case I wouldn't, since it's really only the total words function that needs extracting, but I'd only need a little bit more in the output to reach for that class.
Alternatives
The collection pipeline pattern isn't the only way to accomplish the kinds of things I've talked about so far. The most obvious alternative is what most people would have usually used in these cases: the simple loop.
Using Loops
I'll compare ruby versions of the top 3 NoSQL articles.
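The side-by-side code for this comparison appears to have been lost in extraction; here is a sketch of what the two versions might look like, using an invented Article struct and sample data in place of the real bliki entries:

```ruby
# Hypothetical stand-in for the article objects; only #tags and #words matter here.
Article = Struct.new(:title, :words, :tags)

some_articles = [
  Article.new("NoDBA",    561,  [:nosql, :people, :orm]),
  Article.new("OrmHate",  2279, [:orm]),
  Article.new("Infodeck", 1145, [:nosql, :writing]),
  Article.new("Evodb",    1313, [:nosql, :ruby]),
  Article.new("Ddd",      482,  [:nosql, :ddd]),
]

# Collection pipeline version
top_pipeline = some_articles
  .select {|a| a.tags.include?(:nosql)}
  .sort_by {|a| a.words}
  .take(3)

# Loop version: accumulate matches, sort in place, then truncate
top_loop = []
some_articles.each do |a|
  top_loop << a if a.tags.include?(:nosql)
end
top_loop.sort! {|a, b| a.words <=> b.words}
top_loop = top_loop.take(3)
```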
The collection pipeline version is slightly shorter, and to my eyes clearer, primarily because the pipeline notion is one that's familiar and naturally clear to me. That said, the loop version isn't that much worse.
Here's the word count case.
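This comparison's code also seems to be missing from this copy; a sketch with invented sample data:

```ruby
# Hypothetical stand-in data; only the word counts matter for this case.
Article = Struct.new(:title, :words)

some_articles = [
  Article.new("NoDBA",    561),
  Article.new("OrmHate",  2279),
  Article.new("Infodeck", 1145),
]

# Collection pipeline version
total_pipeline = some_articles
  .map {|a| a.words}
  .reduce {|acc, w| acc + w}

# Loop version: an explicit running accumulator
total_loop = 0
some_articles.each {|a| total_loop += a.words}
```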
The Group case
The article count by tag
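With the original code block missing here too, a sketch of the two versions, again with invented data:

```ruby
# Hypothetical stand-in data with tags.
Article = Struct.new(:title, :tags)

some_articles = [
  Article.new("NoDBA",    [:nosql, :people, :orm]),
  Article.new("OrmHate",  [:orm]),
  Article.new("Infodeck", [:nosql, :writing]),
]

# Collection pipeline version, reusing the flat_map/group_by approach from earlier
counts_pipeline = some_articles
  .flat_map {|a| a.tags.map {|tag| [tag, a]}}
  .group_by {|pair| pair.first}
  .map {|k, pairs| [k, pairs.size]}
  .to_h

# Loop version: mutate a hash whose entries default to zero
counts_loop = Hash.new(0)
some_articles.each do |a|
  a.tags.each {|tag| counts_loop[tag] += 1}
end
```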
In this case the collection pipeline version is much shorter, although it's tricky to compare since in either case I'd refactor to bring out the intention.
Using Comprehensions
Some languages have a construct for comprehensions, usually called list comprehensions, which mirror simple collection pipelines. Consider retrieving the titles of all articles that are longer than a thousand words. I'll illustrate this with coffeescript which has a comprehension syntax, but can also use Javascript's own ability to create collection pipelines.
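The CoffeeScript comprehension itself appears to have dropped out of this copy. As a sketch of the computation it described, here is the equivalent plain pipeline in Ruby (the struct and sample data are invented):

```ruby
# Hypothetical stand-in data for the articles.
Article = Struct.new(:title, :words)

some_articles = [
  Article.new("Short piece", 400),
  Article.new("Long piece",  1400),
  Article.new("Longer one",  2100),
]

# The comprehension selects the long articles and maps out their titles;
# as a pipeline that's a select followed by a map.
long_titles = some_articles
  .select {|a| a.words > 1000}
  .map {|a| a.title}
```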
The exact capabilities of a comprehension differ from language to language, but you can think of them as a particular sequence of operations that can be expressed in a single statement. This way of thinking of them illuminates the first part of the decision of when to use them. Comprehensions can only be used for certain combinations of pipeline operations, so they are fundamentally less flexible. That said, comprehensions are defined for the most common cases, so they are still an option in many cases.
Comprehensions can usually be placed in a pipeline themselves - essentially acting as a single operation. So to get the total word count of all articles over 1000 words I could use:
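The original example here was also lost in extraction; a Ruby sketch of the same computation (sample data invented), where the comprehension's role is played by the select/map stages feeding the reduce:

```ruby
# Hypothetical stand-in data for the articles.
Article = Struct.new(:title, :words)

some_articles = [
  Article.new("Short piece", 400),
  Article.new("Long piece",  1400),
  Article.new("Longer one",  2100),
]

# Select the long articles, map out their word counts, then total them.
total = some_articles
  .select {|a| a.words > 1000}
  .map(&:words)
  .reduce(:+)
```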
The question then is whether comprehensions are better than pipelines for the cases that they work in. Fans of comprehensions say they are, others might say that pipelines are just as easy to understand and more general. (I fall into the latter group.)
Nested Operator Expressions
One of the useful things you can do with collections is manipulate them with set operations. So let's assume I'm looking at a hotel, with functions that return the rooms that are red, blue, at the front of the hotel, and occupied. I can then use an expression to find the unoccupied red or blue rooms at the front of the hotel.

ruby…

(front & (red | blue)) - occupied

clojure…

(difference (intersection (union reds blues) fronts) occ)
Clojure defines set operations on its set datatype, so all the symbols here are sets.
I can formulate these expressions as collection pipelines.
ruby…
red .union(blue) .intersect(front) .diff(occupied)
I monkey-patched Array to add the set operations as regular methods.
clojure…
(->> reds (union blues) (intersection fronts) (remove occ))
I need clojure's 'remove' method here in order to get the arguments in the right order for threading.
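A minimal sketch of the monkey-patch mentioned above - the method names union, intersect, and diff match the Ruby pipeline, implemented here on top of Ruby's built-in array set operators:

```ruby
# add set operations to Array as regular methods
# (a sketch of the monkey-patch described in the text)
class Array
  def union(other);     self | other; end
  def intersect(other); self & other; end
  def diff(other);      self - other; end
end

red, blue = [101, 102], [103]
front     = [101, 103, 104]
occupied  = [103]

available = red.union(blue).intersect(front).diff(occupied)
# available == [101]
```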
But I prefer the nested operator expression forms, particularly when I can use infix operators. More complicated expressions can get really tangled as pipelines.
That said, it's often useful to throw a set operation in the middle of a pipeline. Let's imagine the case where the color and location of a room are attributes of the room record, but the list of occupied rooms is in a separate collection.
ruby…
rooms .select{|r| [:red, :blue].include? r.color} .select{|r| :front == r.location} .diff(occupied)
clojure…
(->> (rooms) (filter #( #{:blue :red} (:color %))) (filter #( #{:front} (:location %))) (remove (set (occupied))))
Here I'm showing (set (occupied)) to show how we'd use a set wrapped over a collection as a predicate for set membership in Clojure.
While infix operators are good for nested operator expressions, they don't work well with pipelines, forcing some annoying parentheses.
ruby…
((rooms .select{|r| [:red, :blue].include? r.color} .select{|r| :front == r.location} ) - occupied) .map(&:num) .sort
Another point to bear in mind with set operations is that collections are usually lists, which are ordered and allow duplicates. You have to look at the particulars of your library to see what this means for set operations. Clojure forces you to turn your lists into sets before using set operations on them. Ruby will accept any array into its set operators but removes duplicates on its output while preserving the input ordering.
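Ruby's behaviour can be seen directly with its array set operators, which accept plain lists, drop duplicates, and preserve input ordering:

```ruby
a = [3, 1, 1, 2]

u = a | [2, 4]   # union: duplicates removed, input order preserved
d = a - [1]      # difference: every occurrence of 1 removed
i = a & [1, 9]   # intersection, also deduplicated
# u == [3, 1, 2, 4]; d == [3, 2]; i == [1]
```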
Laziness
The concept of laziness came from the functional programming world. The motivation may be some code like this:
large_list .map{|e| slow_complex_method (e)} .take(5)
With such code, you would spend a lot of time evaluating slow_complex_method on lots of elements and then throw away all the results except the top 5. Laziness allows the underlying platform to determine that you only need the top five results, and thus to perform slow_complex_method only on the elements that are needed.
Indeed this goes further at runtime: let's imagine the result of slow_complex_method is piped into a scrolling list on a UI. A lazy pipeline would only invoke the pipeline on elements as the final results scroll into view.
For a collection pipeline to be lazy, the collection pipeline functions have to be built with laziness in mind. Some languages, commonly functional languages like Clojure and Haskell, do this right from the start. In other cases laziness can be built into a special group of collection classes - Java and Ruby have some lazy collection implementations.
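In Ruby, for instance, wrapping an enumerable with .lazy gives a lazy pipeline; this sketch counts how many times the expensive function actually runs:

```ruby
calls = 0
slow_complex_method = ->(e) { calls += 1; e * e }

result = (1..Float::INFINITY).lazy
                             .map { |e| slow_complex_method.call(e) }
                             .first(5)
# result == [1, 4, 9, 16, 25] and calls == 5: only the needed
# elements were evaluated, despite the infinite source range
```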
Some pipeline operations cannot work with laziness and have to evaluate the whole list. Sorting is one example, without the entire list you cannot determine even a single top value. Platforms that take laziness seriously will usually document operations that are unable to preserve laziness.
Parallelism
Many of the pipeline operations naturally work well with parallel invocation. If I use map, the result for one element doesn't depend on any of the other elements in the collection. So if I'm running on a platform with multiple cores, I can take advantage of that by distributing the map evaluations across multiple threads.
Many platforms include the ability to distribute evaluations in parallel like this. If you're running a complex function over a large set, this can result in a significant performance boost by taking advantage of multicore processors.
Parallelizing, however, doesn't always boost performance. Sometimes it takes more time to set up the parallel distribution than you gain from the parallelism. As a result most platforms offer alternative operations that explicitly use parallelism, such as Clojure's pmap function, a parallel version of map. As with any performance optimization, you should use performance tests to verify whether using a parallelizing operation actually provides any performance improvement.
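As an illustration only - not how a production platform would do it - a naive thread-per-element parallel map can be sketched in Ruby:

```ruby
# naive sketch: spawn one thread per element, then join them in order;
# real platforms use thread pools and chunking instead
def parallel_map(enum)
  enum.map { |e| Thread.new { yield(e) } }.map(&:value)
end

doubled = parallel_map([1, 2, 3]) { |x| x * 10 }
# doubled == [10, 20, 30] - Thread#value joins and preserves order
```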
Immutability
Collection pipelines naturally lend themselves to immutable data structures. When building a pipeline it's natural to consider each operation as generating a new collection for its output. Done naively this involves a lot of copying, which can lead to problems with large amounts of data. However, most of the time, it's not a problem in practice: usually it's rather small sets of pointers that are copied, not large hunks of data.
When it does become a problem then you can retain immutability with data structures that are designed to be transformed in this way. Functional programming languages tend to use data structures that can efficiently be manipulated in this style.
If necessary you can sacrifice immutability by using operations that update a collection rather than replacing it. Libraries in non-functional languages often offer destructive versions of the collection pipeline operators. I would strongly advise that you only use these as part of a disciplined performance tuning exercise. Start working with the non-mutating operations and only use something else when you have a known performance bottleneck in the pipeline.
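In Ruby the destructive versions are marked with a bang; a small sketch of the contrast:

```ruby
xs = [1, 2, 3]

ys = xs.map { |x| x * 10 }   # non-destructive: builds a new array
xs.map! { |x| x * 10 }       # destructive: rewrites xs in place
# ys == [10, 20, 30] and xs == [10, 20, 30]
```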
When to Use It
I see Collection Pipeline as a pattern, and with any pattern there are times you should use it, and times when you should take another route. I always get suspicious if I can't think of reasons not to use a pattern I like.
The first indication to avoid it is when the language support isn't there. When I started with Java, I missed being able to use collection pipelines a lot, so like many others I experimented with making objects that could form the pattern. You can form pipeline operations by making classes and using things like anonymous inner classes to get close to lambdas. But the problem is that the syntax is just too messy, overwhelming the clarity that makes collection pipelines so effective. So I gave up and used loops instead. Since then various functional-style libraries have appeared in Java, many using annotations which weren't in the language in the early days. But my feeling remains that without good language support for clean lambda expressions, this pattern usually ends up being more trouble than it's worth.
Another argument against it is when you have comprehensions available. Comprehensions are often easier to work with for simple expressions, but you still need pipelines for their greater flexibility. Personally I find simple pipelines as easy to understand as comprehensions, but that's the kind of thing a team has to decide in its coding style.
Even in languages that are suitable, you can run into a different limit - the size and complexity of pipelines. The ones I've shown in this article are small and linear. My general habit is to write small functions, I get twitchy if they go over half-a-dozen lines, and similar rules apply to pipelines. Larger pipelines need to be factored into separate methods, following my usual rule: extract a method whenever there is a difference between what a block of code does and how it does it.
Pipelines work best when they are linear, each step has a single collection input and single output. It is possible to fork to separate inputs and outputs, although I've not put any such examples together in this article. Again, however, beware of this - factoring into separate functions is usually the key to keeping any longer behavior under control.
That said, collection pipelines are a great pattern, one that all programmers should be aware of, particularly in languages like Ruby and Clojure that support them so well. They can clearly capture what otherwise requires long and gnarly loops, and can help make code more readable and thus cheaper and easier to enhance.
Operation Catalog
Here is a catalog of the operations that you often find in collection pipelines. Every language makes different choices on what operations are available and what they are called, but I've tried to look at them through their common capabilities.
collect
Alternative name for map, from Smalltalk. Java 8 uses "collect" for a completely different purpose: a terminal operation that collects elements from a stream into a collection.
fold
Alternative name for reduce. Sometimes seen as foldl (fold-left) and foldr (fold-right).
reduce
Uses the supplied function to combine the input elements, often into a single output value.
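For example, in Ruby:

```ruby
sum        = [1, 2, 3, 4].reduce { |acc, x| acc + x }      # no seed value
sum_seeded = [1, 2, 3, 4].reduce(10) { |acc, x| acc + x }  # seed of 10
joined     = ["a", "b", "c"].reduce(:+)                    # operation by name
# sum == 10; sum_seeded == 20; joined == "abc"
```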
For articles on similar topics…
…take a look at the following tags:
Footnotes
1: A more idiomatic lisp pipeline
One issue here is that the lisp example isn't that idiomatic, since it's common to use named functions (easily referenced using the #'some-function syntax), creating small functions for particular cases as you need them. This might be a better factoring of that example.
(defun nosqlp (article) (member 'nosql (article-tags article))) (subseq (sort (remove-if-not #'nosqlp (some-articles)) #'< :key #'article-words) 0 3)
2: Java Pipeline
Here's the initial pipeline in Java
articles.stream() .filter(a -> a.getTags().contains("nosql")) .sorted(Comparator.comparing(Article::getWords).reversed()) .limit(3) .collect(toList());
As you might expect, Java manages to be extra verbose in several respects. A particular feature of collection pipelines in Java is that the pipeline functions aren't defined on a collection class, but on the Stream class (which is different to IO streams). So you have to convert the articles collection into a stream at the beginning and back to a list at the end.
3:
Rather than passing a block to a Ruby method, you can pass a named function by preceding its name (which is a symbol) with "&" - hence &:words. With reduce, however, there is an exception: you can just pass it a function's name, so you don't need the "&". I'm more likely to use a function name with reduce, so I appreciate the inconsistency.
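A small Ruby sketch of the difference:

```ruby
lengths = %w[a bb ccc].map(&:length)  # map needs the "&" prefix
total   = [1, 2, 3].reduce(:+)        # reduce accepts a bare symbol
also    = [1, 2, 3].reduce(&:+)       # though the "&" form works too
# lengths == [1, 2, 3]; total == 6; also == 6
```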
4: Using a lambda or a function name
There's an interesting language history in the choice between using lambdas and function names, at least from my dabbler's perspective. Smalltalk extended its minimal syntax for lambdas, making them easy to write, while calling a literal method was more awkward. Lisp, however, made it easy to call a named function, but required extra syntax to use a lambda - often leading to a macro to massage that syntax away.
Modern languages try to make both easy - both Ruby and Clojure make either calling a function or using a lambda pretty simple.
5: There is a polyseme here as "map" may refer to the operation map or to the data structure. For this article I'm going to use "hashmap" or "dictionary" for the data structure and only use "map" for the function. But in general conversation you'll often hear hashmaps referred to as maps.
6: Using Juxt
One option in clojure is to run multiple functions inside a map using juxt:
(->> (articles) (group-by :type) (map (juxt first (comp count second))) (into {}))
I find the version using a lambda to be clearer, but then I'm only a dabbler in Clojure (or functional programming in general).
Acknowledgements
My thanks to my colleagues who commented on an early draft of this article: Sriram Narayanan, David Johnston, Badrinath Janakiraman, John Pradeep, Peter Gillard-Moss, Ola Bini, Manoj Mahalingam, Jim Arnold, Hugo Corbucci, Jonathan Reyes, Max Lincoln, Piyush Srivastava, and Rebecca Parsons.
Significant Revisions
15 October 2014: added section on nested operator expressions
12 September 2014: added distinct and slice to operation catalog
28 July 2014: published final installment
24 July 2014: published fourth installment with alternatives
23 July 2014: published third installment with index inversion example
22 July 2014: published second installment with first two examples and the catalog
21 July 2014: published first installment
http://martinfowler.com/articles/collection-pipeline/
Up to [cvs.NetBSD.org] / src / lib / libc / gen
Request diff between arbitrary revisions
Default branch: MAIN
Revision 1.12.56.1 / (download) - annotate - [select for diffs], Tue Oct 30 18:58:48 2012 UTC
Revision … / (download) - annotate - [select for diffs], Fri Oct 16 12:47:45 1998 UTC
Changes since 1.9: +6 -2 lines
Diff to previous 1.9 (colored)
Need an internal name for signal().
Revision 1.9 / (download) - annotate - [select for diffs], Mon Jul 21 14:07:33 1997 UTC (18 years, 10 months ago) by christos
Branch: MAIN
Changes since 1.7: +3 -2 lines
Diff to previous 1.7 (colored)
Fix RCSID's
Revision 1.7.4.1 / (download) - annotate - [select for diffs], Thu Sep 19 20:03:47 1996 UTC (19 years, 8 months ago) by jtc
Branch: ivory_soap2
Changes since 1.7: +3 -2 lines
Diff to previous 1.7 (colored) next main 1.8 (colored)
snapshot namespace cleanup: gen
Revision 1.5.4.1 / (download) - annotate - [select for diffs], Tue May 2 19:35:12 1995 UTC (21 years ago) by jtc
Branch: ivory_soap
Changes since 1.5: +2 -1 lines
Diff to previous 1.5 (colored) next main 1.6 (colored)
#include "namespace.h"
Revision 1.7 / (download) - annotate - [select for diffs], Sat Mar 4 01:56:04 1995 UTC (21 years ago)
Changes since 1.6: +2 -2 lines
Diff to previous 1.6 (colored)
fix up some RCS Id's i botched.
Revision 1.6 / (download) - annotate - [select for diffs], Mon Feb 27 05:51:17 1995 UTC (21 years, 3 months ago) by cgd
Branch: MAIN
Changes since 1.5: +9 -4 lines
Diff to previous 1.5 (colored)
merge with 4.4-Lite, keeping local changes. clean up Ids
Revision 1.1.1.2 / (download) - annotate - [select for diffs] (vendor branch), Sat Feb 25 09:12:37 1995 UTC (21 years ago)
Revision 1.5 / (download) - annotate - [select for diffs], Tue Nov 30 21:21:45 1993 UTC (22 years ago)
Changes since 1.4: +3 -3 lines
Diff to previous 1.4 (colored)
Renamed _sigintr to __sigintr. _sigintr is in the user's namespace.
Revision 1.4 / (download) - annotate - [select for diffs], Thu Aug 26 00:45:08 1993 UTC (22 years, 9 months ago) by jtc
Branch: MAIN
Changes since 1.3: +2 -2 lines
Diff to previous 1.3 (colored)
Declare rcsid strings so they are stored in text segment.
Revision 1.3 / (download) - annotate - [select for diffs], Fri Jul 30 08:23:27 1993 UTC (22 years, 9 months ago) by mycroft
Branch: MAIN
Changes since 1.2: +2 -1 lines
Diff to previous 1.2 (colored)
Add even more RCS frobs.
Revision 1.2 / (download) - annotate - [select for diffs], Wed Jun 16 22:12:16 1993 UTC (22 years, 11 months ago) by jtc
According to Ansi C, signal is supposed to return SIG_ERR on error, not BADSIG. I know they are the same thing, but this allows me to remove the otherwised unused, bogus macro BADSIG from signal.h
Revision 1.1.1.1 / (download) - annotate - [select for diffs] (vendor branch), Sun Mar 21 09:45:37 1993 UTC (23 years, 2 months ago).
http://cvsweb.netbsd.org/bsdweb.cgi/src/lib/libc/gen/signal.c
Chapter 12: Components and plugins - TODO
Trapped Ajax links
Normally a link is not trapped, and by clicking a link inside a component, the entire linked page is loaded. Sometimes you want the linked page to be loaded inside the component. This can be achieved using the A helper.
The files that compose a plugin are not treated by web2py any differently than other files except that admin understands from their names that they are meant to be distributed together, and it displays them in a separate page:
Notice how all the code except the table definition is encapsulated in a single function called _ so that it does not pollute the global namespace. Also notice how the function creates an instance of a PluginManager.
Now in any other model in your app, for example in "models/db.py", you can configure this plugin as follows::
You can override these parameters elsewhere (for example in "models/db.py") with the code:
You can configure multiple plugins in one place.
Plugins repositories
While there is no single repository of web2py plugins, you can find many of them at one of the following two URLs:
Here is a screenshot from the s-cubism repository:
http://web2py.com/books/default/chapter/37/12/componentes-e-plugins-todo
Help:Collections/Feedback
Proposal: require javascript
Please make it possible, that javascript is not needed to use this nice service. Thanks!
- Added it to their bug tracker. — Mike.lifeguard | talk 23:41, 28 October 2008 (UTC)
Question: Quiz
Is there a way to use the quiz together with collections? For example: it would be nice if the software could show the answers that are in the quiz code in a section after the exercises... But until now, even the questions are not displayed correctly... Helder 00:00, 1 April 2009 (UTC)
- As far as I know the quiz extension is not supported by wikibooks. If it was, you should be able to use the standard tricks to have a different on-screen and print version, i.e. <includeonly> and <noinclude> if you use the traditional print version approach and substitution templates if you use collections. --Martin Kraus (talk) 13:31, 1 April 2009 (UTC)
Question: image and code
Hi, I have made pdf from page : Fractals/Iterations_in_the_complex_plane/Julia_set Julia set. It looks good. The only problem is that the code is on the image . See for example section "External distance estimation" in pdf file. Here psedocode : "if (LastIteration==IterationMax) then { /* interior of Julia set: color = black */ }...." Is on the image : Julia set : image with C source code. How can I change it ? Regards. --Adam majewski (talk) 07:47, 30 May 2009 (UTC)
Bug: Errors in creating ODF book
Trying to create žaliems:_openSUSE gets this error:
An error occured on the render server: traceback Traceback (most recent call last): File "/home/pp/local/lib/python2.6/site-packages/mwlib-0.12.13-py2.6-linux-x86_64.egg/mwlib/apps/render.py", line 177, in __call__ writer(env, output=tmpout, status_callback=self.status, **writer_options) File "/home/pp/local/lib/python2.6/site-packages/mwlib-0.12.13-py2.6-linux-x86_64.egg/mwlib/odfwriter.py", line 775, in writer w.writeBook(book, output=output) File "/home/pp/local/lib/python2.6/site-packages/mwlib-0.12.13-py2.6-linux-x86_64.egg/mwlib/odfwriter.py", line 146, in writeBook self.write(e, self.doc.text) 234, in write if not saveAddChild(parent, e): File "/home/pp/local/lib/python2.6/site-packages/mwlib-0.12.13-py2.6-linux-x86_64.egg/mwlib/odfwriter.py", line 191, in saveAddChild p.addElement(c) File "/home/pp/local/lib/python2.6/site-packages/mwlib-0.12.13-py2.6-linux-x86_64.egg/mwlib/odfwriter.py", line 85, in addElement log("ParagraphProxy:addElement() ", e.type, "not allowed in any parents, failed, should have been added to", self.type) AttributeError: Element instance has no attribute 'type' sys.argv=['/home/pp/local/bin/mw-render', '--logfile', '/home/pp/cache/serve/1c/1cd5dd77142f3cd2/mw-render.log.odf', '--error-file', '/home/pp/cache/serve/1c/1cd5dd77142f3cd2/errors.odf', '--status-file', '/home/pp/cache/serve/1c/1cd5dd77142f3cd2/status.odf', '--writer', 'odf', '--output', '/home/pp/cache/serve/1c/1cd5dd77142f3cd2/output.odf', '--pid-file', '/home/pp/cache/serve/1c/1cd5dd77142f3cd2/pid.odf', '--metabook', '/home/pp/cache/serve/1c/1cd5dd77142f3cd2/metabook.json', '--keep-zip', '/home/pp/cache/serve/1c/1cd5dd77142f3cd2/collection.zip', '--config', '', '--template-blacklist', 'MediaWiki:PDF Template Blacklist', '--template-exclusion-category', 'Exclude in print', '--print-template-prefix', 'Spausdinti', '--print-template-pattern', '$1/Print', '--script-extension', '.php', '--language', 'lt']
I found that this related to:
- Some links has # symbol
- There is some <!-- ... --> markup
- Name of article has : symbol
But PDF generates successfully!
Can somebody fix this ODF book creation error?
Bug: repeats the first 5 entries in the table of contents
On the first page, the first five items are shown with page numbers twice, as in the following. Pearts (talk) 23:11, 31 July 2010 (UTC)
Contents Editing Wikitext/Pictures/Images in Containers 43 Editing Wikitext/Tables 58 Editing Wikitext/Tables Ready to Use 70 Editing Wikitext/Making Templates A101 75
Bug: pdf book font is kinda small
the book font looks like it's an 8pt font, that's really small; it should be a 10 or 12 point font. Pearts (talk) 23:16, 31 July 2010 (UTC)
Images can be improperly positioned in generated PDF
Hi,
I've generated a PDF from the Blender_3D:_Noob_to_Pro wikibook. Resulting PDF is: here (A4 format).
Result is good, except for some pages where images (which appeared centered on the online version) are moved partially outside the right margin in the generated PDF. An example of such a page incorrectly generated is Blender_3D:_Noob_to_Pro/Coordinate_Transformations (the 3rd image of the LEM is centered online, but incorrect in the generated PDF).
Please help. A.M.Reng. (discuss • contribs) 11:11, 19 August 2011 (UTC)
Cannot save collection using Collections tool
I use IE9, and I have been a user here for over a month. I cannot fill in a collection file name for either my namespace or the wiki space -- the input field is disabled. And so is the "submit" button for the field.
If I modify the HTML on-the-fly from my browser to fill in the collection name and enable the submit button, then hit the submit button, the save never happens. Yesterday, when I tried this, I got a bunch of wiki errors.--AngelinaBelle (discuss • contribs) 11:25, 19 October 2011 (UTC)
- I have filed a bug report for this, as I see it's also been reported at Wikipedia and I was able to reproduce the error when saving. I had to try to save to the Wikibooks namespace, as I was able to fill in the text field for the Wikibooks namespace, but not the user namespace. – Adrignola discuss 14:20, 19 October 2011 (UTC)
A few issues with PDF render
I like collections, but the output just isn't up to it at the moment and it looks like I'll have to go back to print versions. You can find the issues here in a collection I've been trying to build: User:Pluke/Collections/Unit2
- <syntaxhighlight> tag not working
- passing <syntaxhighlight> to a template causes the template parameter to not be passed correctly, defaulting back to {{{1}}}
- You can't have an exercise box that spreads over more than one page
- See Boolean gate combination
- internal wikipedia links are not highlighted, but present ie. using: [[w:apple|apple]]
- inconsistent rendering of CSS / styles
- box widths not 100% when they are in wiki view
- background colours missing from most css
- style formatting of text in tables doesn't seem to work
- wikitable display is inconsistent
- table caption is displayed incorrectly |+ '''Examples of Boolean Algebra shown in a truth table''' (this may be a result of nested tables not being handled correctly)
- transparency on SVGs appears as plain black
- <big> tag not working
- {{cquote}} rendering is a little clumsy, the speech marks over lap the text
Pluke (discuss • contribs) 22:12, 20 December 2011 (UTC)
Section Numbers
I'm in the process of adding chapter and section numbers for a collection, specifically Wikibooks:Collections/GLSL_Programming_in_Unity. The idea is to hard code chapter and section numbers in that page and to have a template ({{GLSL Programming Unity SectionRef}}) with hard coded chapter and section numbers to produce numbered references in the output for print. Whenever the numbering changes (e.g. when sections are added, removed, or moved around), these two pages but no other pages have to be changed. Is there an easier way to achieve a similar result? --Martin Kraus (discuss • contribs) 22:01, 19 May 2012 (UTC)
Positive Feedback
I am not sure whether this is the proper venue for submitting such feedback, but it's the only one I've found so far. I am very impressed with the quality of the output of the PDF generation. I mean, I'm no document expert or anything, but it exceeds anything I expected. Quite excellent. If anyone could pass on my congratulations and my thanks to the people who made the PDF generation system possible, I would be grateful. A job well done. -- 00:39 28 January 2013 (UTC)
A request for HTML output
Some time ago I downloaded the wikibook Blender 3D: Noob to Pro. Originally it was in three large HTML files, "Beginner Tutorials", "Advanced Tutorials", and "Miscellaneous Tutorials". This was very convenient and made it very easy to use offline while teaching myself how to use Blender. Unfortunately, in the intervening period some misguided person has spent a lot of effort cutting up the three large pages into what are now 254 small pages, some consisting of only a single paragraph. I'm sure this person meant well and undoubtedly had in mind the oft-repeated saying that webpages should consist of little more than a screenful of data. (I have tried to find the origin of this suggestion that smaller pages are somehow "better". It always seemed to me a rather patronising belief, as if we "ordinary folk" are not capable of maintaining our attention for more than a few minutes at a time. It seems very out of place on the Wikibooks site, where people have deliberately come to look for long-form content.) Sadly, this decimation has made what was a wonderful resource into something completely unusable for offline reading.
So I tried today to collect all 254 pages into a downloadable book, with disastrous results.
It took about 4 hours going through all the pages to click on the "Add this page to your collection" link at the head of each page. (The hover "Add this page to your collection" links on the main index page don't display.) I finally had the collection ready to be rendered and downloaded. Away it went rendering for what may have been half an hour, then presented me with the link to download the epub. (I intended to unzip the epub later and pull out the HTML files that comprise it.) I clicked the link and I'm not sure how long it took then displaying a blank page (20 minutes?) before finally saying that an error had occurred. I went back to the earlier page and tried downloading again. Another error and a note saying that my collection seems to have been deleted (though I don't think it was). I tried a few more times, including attempting to download as OpenDocument format (I could convert that to HTML later).
Interestingly I looked around in the /tmp directory on my machine and found three 13MB files that are incomplete PDF documents with times corresponding to my attempts to download the epub and OpenDocument files... but I never requested PDF. (I've since tried a test on a single page, and the collection process only delivers PDF.)
May I request adding HTML format to the book collection formats? It would be much easier to do than the other three formats. PDF is terribly bloated and suited only to printing on paper. OpenDocument format is similar, but at least possible to convert to other formats. The epub format is a conveniently compact reader format, but none of the three formats allow animated gifs or sounds and videos to play. HTML alone is capable of printing, displaying animated gifs and playing sound and video.
Can I also request that the policy of slicing pages into ever smaller and smaller pieces be reversed? It is a terribly retrograde step that makes books virtually unusable unless you are lucky enough to have cheap, fast, always-on internet. Most of the planet still has expensive, slow, occasional internet. Remember, this isn't Wikipages or Wikiparagraphs, it is Wikibooks. People come here looking for large chunks of information: books. To serve it up to them in tiny chunks is to sorely underestimate them, and as it did for me today, makes life far more difficult than it need be. It has sadly made Blender 3D: Noob to Pro completely unusable for offline work. It's like tearing the pages out of a paper book and handing them to a reader one at a time as they request them, instead of letting the reader have the whole book to read as they wish. It makes no sense.
Miriam e (discuss • contribs) 08:47, 9 December 2013 (UTC)
- Splitting the tutorials into smaller pages was done to make it easier to find specific tutorials. Having them all on one page is great for reading through the entire contents, in order, or for reference offline, but it isn't so useful for being able to find the specific thing you're looking up. Also, splitting these into separate pages makes it considerably easier to see which tutorials need to be fleshed out (those with just a single paragraph, for example), which is one of the areas in which this particular book needs lots of improvement.
- I agree that HTML export would be useful, though there is one important caveat that isn't noted here. Images don't come with HTML files - with animated GIFs being one of your listed concerns, that's a big deal, here. So it'd have to be an HTML archive of some sort. Still. It'd be useful.
- That said, PDF actually does support animated GIFs, audio files, and video. These files can easily be embedded into the PDF. I'm not sure whether the PDF renderer is smart enough to actually do that, but it does seem like a good idea to add if not.
HTML Printable Version
If Collections could be output as HTML, than the Collections feature could supersede the current
{{printable}} feature, and all forms could easily stay in sync with one another. Pat Hawks (discuss • contribs) 03:29, 25 August 2016 (UTC)
https://en.m.wikibooks.org/wiki/Help:Collections/Feedback
Diamond - 0.25 ct - Round - D (colourless) - SI1
Comes with a GIA gemological Diamond Focus report. (Includes: official GIA sticker results, laser inscription of the report number on the diamond, and full report on the gia.edu website).
GIA is the world’s foremost authority on diamonds.
We ship with FedEx delivery, traceable and insured. You might need to clear the shipment from the Diamond Office in Antwerp.
* Chinese customers: loose Diamonds can only be shipped to Hong Kong.
* French customers: shipping costs include an external customs broker agent to import loose diamonds into France.
Check our other items for sale by clicking on our seller name above (DiamondsExpress)
If you have any question please contact us.
- Number of stones: 1
- Stone: Diamond
- Total carat weight: 0.25
- Shape / cut: Round
- White colour: D (colourless)
- Clarity: GIA, SI1
- Treatment: Natural (untreated)
- Certification: GIA
- Sealed: No
- Laser engraved: Yes
https://www.catawiki.es/l/27957021-1-pcs-diamante-0-25-ct-redondo-d-incoloro-si1-gia
Hi, guys! I'm Sridhar Janardhan, back with another ible. Today I am going to teach you how to send data from a mobile to an Arduino and display it using an LCD. This is achieved by using the HC-05 Bluetooth module.
Step 1: Components Required:
The components required for this ibles are:
- Arduino Uno
- Breadboard
- LCD
- Bluetooth module
- Jumper Wires
Let's start to connect the Bluetooth module
Step 2: Bluetooth Module Connection:
The Bluetooth module is used for transmitting data wirelessly from a transmitter to a receiver. The HC-05 module works on this principle. Let me explain the basic pins of the Bluetooth module.
The HC-05 Bluetooth module has four pins:
- TX pin - Transmitting pin which is used to transmit the data
- RX pin - the receiving pin, which is used to receive data
- VCC pin -power supply pin
- GND pin - power supply pin
The connection of the module is as follows:
- TX pin to the RX pin of Arduino
- RX pin to the TX pin of Arduino
- VCC pin to the Positive railings of the breadboard
- GND pin to the negative railings of the breadboard
Step 3: LCD Interface:
Interfacing an LCD to an Arduino directly is hectic, as it needs many connections and the extra wires clutter the circuit. To avoid this, an I2C backpack is used.
The connection of the LCD is as follows:
- VCC - to the positive railing of the breadboard.
- GND - to the negative railing of the breadboard.
- SDA - to Arduino analog pin A4.
- SCL - to Arduino analog pin A5.
Now let's start coding.
Step 4: Coding
#include <Wire.h>
#include <LiquidCrystal_I2C.h>

// 16x2 LCD behind an I2C backpack at address 0x3f (some backpacks use 0x27)
LiquidCrystal_I2C lcd(0x3f, 16, 2);

void setup() {
  lcd.begin();           // some versions of the library use lcd.init() instead
  lcd.backlight();
  Serial.begin(115200);  // must match the baud rate configured on the HC-05
}

void loop() {
  // Echo every byte received over Bluetooth straight onto the LCD
  if (Serial.available()) {
    lcd.write(Serial.read());
  }
}
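On the sending side you can type straight into a Bluetooth terminal app. If you want to paginate long messages into 16-character rows before sending, a small Python helper could look like this (an illustrative sketch, not part of the tutorial; wrap_for_lcd is a made-up name):

```python
# Hypothetical helper for the sending side: split a message into
# 16-character rows and keep only the rows a 16x2 LCD can show.
def wrap_for_lcd(message, cols=16, rows=2):
    """Split message into cols-wide lines and keep the last rows lines."""
    lines = [message[i:i + cols] for i in range(0, len(message), cols)] or [""]
    return lines[-rows:]

print(wrap_for_lcd("HELLO FROM PHONE"))
```

A 36-character message, for example, wraps into three rows, of which only the last two fit on the display.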
9 Discussions
Question 2 months ago on Step 5
Hello... I need help using the less expensive Nano boards on the market... they no longer allow me to download any more... can you help? I have downloaded and installed the drivers that they recommend, however they still don't work.
Question 2 months ago on Step 5
The code is not working... please help.
Question 5 months ago
Do you think this would work with a NRF24L01 instead of the WIFI?
Question 10 months ago on Step 1
I am trying to add Bluetooth to my iPod 5th gen 30 GB. I have the iFlash Quad and a +1900 mAh battery.
Could this be done using Arduino parts?
Answer 6 months ago
It is sometimes difficult to connect with Bluetooth on an iPad. Keep trying the connection or change the Bluetooth app. If after this it still does not work, please let me know.
1 year ago
Can you provide the exact library files that you used?
1 year ago
Nice work! But what did you use for your power supply?
1 year ago
What app did you use to send the message?
Reply 1 year ago
Blueterm
Re: Decompiler.NET reverse engineers your CLS compliant code
From: Jonathan Pierce (support_at_junglecreatures.com)
Date: 09/25/04
Date: Sat, 25 Sep 2004 18:33:36 GMT
> Isn't that a matter of perspective? tiny to you... perhaps could have
> cost
> a business money... Not too small to them is it?
Our customers test their own software. If they encounter any code generation
issues related to using our product, they would contact us and we would
resolve the issue by releasing an updated version that corrects the issue.
The small issues that you identified, such as your nested array
initialization example, resulted in generated code that would expose the
issue at compile time. Since no one reported it, the issue either didn't
affect them, or they fixed the generated code manually and didn't tell us.
Either way, it couldn't have cost them any money and I'm sure none were
aware of it since our customers enjoy the support that we provide to them
and would not hesitate to let us know if they detected any code generation
issues.
> Hmm... must be why everyone on *here* says your product is inferior.
The people shouting on here are you, Nak, and <a>. None of you are respected
by anyone in this group, you only post intentionally malicious messages
designed to create controversy, and have been reprimanded by several others
in this thread besides myself. All of you attempted to remain anonymous, and
continue to post intentionally false statements about our company and
product. You have even commented that you are intentionally looking to start
up non-technical arguments just for your own entertainment at the expense of
the time that you continue to waste for everyone else still reading this
thread who is really interested in decompilation technical issues. Since you
are not, you and your friends should probably leave since you are not
interested anyway in our products, and you are wasting the time of the
entire developer community reading this thread because they are interested
in the technical content that it contains and have not yet given up trying
to filter out your spam messages.
> I just hear the same
> speech over and over about how many bugs you've reported to other
> companies,
> and how you have no bugs outstanding?
You read this thread voluntarily. Since you aren't interested in our
products, and no one else wants to read your non-technical posts, why
don't you just stop reading the thread, since its title indicates that it is
related to our product, which you are not interested in. Your false negative
statements about it serve no purpose except to waste everyone's time
including your own. You and your friends should stop attacking our company
and others in public newsgroups if your goal is to not give those companies
additional exposure, but we must respond to defend our products each time
one of you posts inaccurate and misleading information that interferes with
the success of serious developers who are genuinely interested in the
technical topics that our product addresses.
> There are probably MANY
bugs in your software that you're not aware of. That's software. Which
> makes
> coming on here and bragging about 100% kinda foolish.
We are not bragging. We are asking you to support the false claims that you
make about fictitious bugs and negative user experiences using our
products.
> After all, aren't
> there known bugs within the framework itself, which you yourself admit to
> using? Particularly the reflection namespaces? In order for your
> software
> to be perfect it would mean Microsoft's software would have to be perfect.
Again, we didn't say anything about perfect, you did. If we use a 3rd party
library or part of the framework, we adapt our implementation to avoid it
and report it to Microsoft or the vendor involved. We have posted several
Whidbey related bugs to the 2005 feedback center that were confirmed and
fixed for newer builds of the 2.0 framework.
>
>> We do not actively search our competitors web sites looking for their
>> test
>> cases, but we do notify them directly when we identify issues in their
>> products.
>
> Why not? I would get as much info from my competitors as I could!
>
We do the best we can, and anyone concerned will let us know if they become
aware of any actual problem in our software, as opposed to the fictitious
statements that you and your friends keep making here. Instead of spending
so much time creating false propaganda, you could help the developer
community by posting real bug reports that you detect in the framework or
our products.
> Oh... sorry... Every time I hear a long-winded speech with absolutely no
> substance to it except for the same thing written *over* and *over* again I
> instantly think I have to start clapping at the end.
You should be apologizing to everyone else here whose time you keep wasting
by posting these spam messages to this technical forum that require us to
respond and defend our products against your false claims. For our benefit,
yours, and everyone else's here, please stop posting messages that continue
to waste the time of all of us here. Go ahead and lurk if you like and
contact us offline, but I don't want to continue to assist you in filling up
these newsgroups with non-technical arguments that everyone has to filter
out to find the technical information that they are looking for. I probably
should never have responded to any of you in the first place, except that
your attacks on us and our products require us to respond to defend
ourselves and defuse your intentionally misleading statements.
In short, please write us privately if you feel it necessary, but leave
these groups to technical discussions that developers here are interested in
reading. Some of the other people here have spoken up to discourage your
inappropriate behavior here, and you have probably scared a few people who
don't want to have to waste their own time arguing with you, but I'm sure
that the majority of the people in this newsgroup and the ones reading this
thread would like you, Nak, and <a> to just go away from this thread, this
group, and any news server that they are interested in for its technical
content.
Jonathan
Cannot added DFS namespace to 2012 server
- Saturday, January 12, 2013 1:24 AM
I have a DFS namespace that was created on a Windows 2003R2 server many years ago. We recently added a new branch location and put out our first Server 2012 DC (which was an upgrade from a 2008R2 server that was on a different domain owned by the company we purchased). I am now trying to add our DFS namespace to this server and keep getting the following error on my 2K3 server when I try to add the 2012 server:
I have the DFS tools installed on the 2012 server and there are no namespaces showing. Just for grins I added and deleted a couple local namespaces after getting the error just to see what happens and those work fine.
Not sure what else to look at here. I recently added a 2008R2 server and that went fine. No changes were made to my DFS between then and now.
All Replies
- Saturday, January 12, 2013 5:12 AM
Hello,
Since you have a lower functional level with the 2K3 DC, the 2012 server cannot host it. It must be hosted on the 2K3 server.
You can optionally move AD services to the newer 2012 servers and then demote the 2k3 to member servers so that you can raise the functional level and host DFS namespace on the 2012 server.
Miguel Fra | Falcon IT Services, Miami, FL | Blog
- Saturday, January 12, 2013 4:53 PM
Sorry, I did not specify: the 2K3 machine is not a DC, it's a standalone file server.
I was wondering if it still might be something along these lines, since after doing some looking around it appears it's running the 2000 level of DFS, not the 2008 level (but I am not positive of this; I did not see a way to check the DFS level).
If it matters, the forest is at 2003 functional level until I manage to replace about 10 more DCs, but the DFS owner is not a DC. If this is still part of the problem, would it resolve the issue to upgrade just this machine to 2012? So far I have been unable to find anything stating you can't add a 2012 machine to a 2003 DFS.
This is just a DFS share for a bunch of files all our users need nothing special...
- Saturday, January 12, 2013 5:08 PM
Hello,
In order to run the new 2012 server as a DFS host, you will need to promote the DFS functional level from 2000 to 2008, which means getting rid of any W2K3 DCs. You can also try to update your schema to 2003 R2 or 2008; this will add new objects to AD for DFS. You may be able to then use 2008 DFS to host a namespace without promoting the functional level to 2008, but I am not 100% sure. I can't find any documentation online to support this theory.
You said you added a 2K8R2 with DFS? Is it actually hosting a namespace or is it just a target? AFAIK, to host a namespace in 2008, you need your DFS functional level raised. Perhaps Monday some MS folks can shed some light on this.
- Monday, January 14, 2013 8:08 AM (Moderator)
Hi,
1. Please test to do this with command line instead:
dfsutil /AddFtRoot /Server:<server name> /Share:<sharename>
2. Try to use the Windows 2008 R2 DFS console instead of the Windows 2003 R2 version and see if there is any difference.
TechNet Subscriber Support in forum. If you have any feedback on our support, please contact tnmff@microsoft.com.
- Proposed As Answer by Miguel FraMicrosoft Community Contributor Tuesday, January 15, 2013 1:02 AM
- Marked As Answer by C-M Wednesday, January 16, 2013 8:36 PM
-
- Monday, January 14, 2013 7:59 PM: I was able to add the 2012 server to the namespace from a 2008R2 server.
From Good Code to Great
November 12, 2010 at 11:15 AM | Tags: python, documentation, testing, thoughts, narrative
This post was imported from an earlier version of this blog. Original here.
Why
For code, documentation and tests separate the good from the great. No big surprise there; we already know we should be writing docs and tests. What’s been less explored is the effect that writing docs and tests has on the code itself. Along the way, I hope to offer some insight into what good docs and tests look like, and why we don’t write them more often.
I’m a test-early guy, not test-first. During the initial braindump stage of a project, things are flying around too quickly for the tests to keep up: methods getting refactored, classes split in half, idioms invented and rearranged. I don’t want to break the stream of ideas by maintaining a test suite that’s going to be 80% out of date every two days. That’s how I do things – if you like to write your tests first, good (and you’re probably working on mathematical or otherwise very functional code).. That sucks. It’s boring, it takes a long time, and it sucks. For Twiggy, I have almost twice as many lines of test code as lines of code code, almost all of them written around the same time. Far and away the least fun part of developing.
Coverage: escaping the suck
Coverage is a rope out of the testing suck-hole. Using the reports, you can turn testing into a little game of inching your percentage up. Make sure to test a single unit of code at a time; otherwise, it'll give higher numbers than you deserve, as modules import and use each other.
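As a sketch of what "a single unit at a time" looks like in practice (illustrative code, not from Twiggy; word_count is a made-up function):

```python
import unittest

# The single unit under test: one small, self-contained function.
def word_count(text):
    """Count whitespace-separated words."""
    return len(text.split())

class WordCountTests(unittest.TestCase):
    # Each test exercises exactly one behaviour of the one unit above,
    # so the coverage report credits only the code this test really runs.
    def test_simple(self):
        self.assertEqual(word_count("from good code to great"), 5)

    def test_empty(self):
        self.assertEqual(word_count(""), 0)
```

With a layout like this, `coverage run -m unittest` followed by `coverage report` gives per-file percentages to inch upward.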
I print my docs as I work on them, often. I think I killed half a tree while writing Twiggy’s documentation. Taking a pen to paper helps me focus on one section at a time, while improving the overall flow. There’s something about seeing your documentation in all its fully-formatted glory that causes the changes you need to make to pop out at you.
API
API docs mirror the structure of the code – a fairly straightforward explanation of methods, classes, argument types, etc. They're often just the docstrings:
def frobnicate(x):
    """
    :arg int x: how hard to frob
    """
Docs like this don’t really tell readers much, and aren’t adding to our understanding. They’re still necessary as a reference and the minor details are important, but at best, they save us from having to re-read the code every time we want to use it. Most projects, especially non-public ones, stop here.
API docs are the documentation that programmers write for ourselves. They’re the absolute minimum necessary for another person to use your code, but they don’t give a reader anything to grab on to if they aren’t intimately familiar with the project to begin with.
Reference
Reference documentation is higher-level. It describes how to use the features of an application or library to accomplish particular tasks. Use cases, basically. Reference docs are well suited for readers who are familiar with the problem you’re working on, but not your specific solution. Most documentation for open source projects consists of reference docs.
Occasionally, writing these docs will lead to ideas for new features, or point out problems with existing ones.
Narrative
Taking the time to write these docs reveals ways that the code could be cleaner, simpler, easier and more intuitive. You’ll change your code so you can tell a better story about it. Unlike API or reference docs, there’s no existing structure to organize around. So I often begin there – what are the important points I want to cover? A phrase or short example is often enough to start – the details get slowly filled in as I come back and iterate.
Giving talks helps here, and not only for the feedback from a live audience (blank stares vs. nods). A presentation forces you to explain your project concisely and clearly to an audience that's there for the pizza and beer. The shorter the talk the better – as a speaker, you're not going to learn anything by taking an hour. Thirty minutes max, and I'm a huge fan of the five minute lightning talk. But that's a subject for another post.
Great takes time
If you’re reading this, you can probably already write good code (hey, I know my audience). In my opinion, that means you can write great code. Doing so takes time – and not necessarily where we expect. By writing tests and narrative documentation, we gradually discover how our code can be improved. That process is often harder and takes longer than we would wish; but the great code that results is the reward.
Pete cooks, rides bikes and hacks Python. Maybe for you? Don’t worry, he wears pants.
These comments were imported from an earlier version of this blog.
Tshepang Lekhonkhobe 2010/11/13 13:39:13 -0800
Excellent entry... thanks.
Glossary¶
This is a glossary for some definitions used in this documentation and still under construction.
- .po
The file format used by the gettext translation system.
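A .po file is plain text made of msgid/msgstr pairs, for example (illustrative fragment):

```po
#: homepage.pt:12
msgid "Welcome"
msgstr "Bienvenue"
```

The `#:` comment is a source reference added by the extraction tools.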
- Acquisition
Simply put, any Zope object can acquire any object or property from any of its parents. That is, if you have a folder called A, containing two resources (a document called homepage and another folder called B), then a URL pointing at http://…/A/B/homepage would work even though B is empty. This is because Zope starts to look for homepage in B, doesn't find it, and goes back up to A, where it's found. The reality, inevitably, is more complex than this. For the whole story, see the Acquisition chapter in the Zope Book.
- AGX
AGX is short for ArchGenXML.
- Archetypes
Archetypes is a framework designed to facilitate the building of applications for Plone and CMF. Its main purpose is to provide a common method for building content objects, based on schema definitions. Fields can be grouped for editing, making it very simple to create wizard-like forms. Archetypes is able to do all the heavy lifting needed to bootstrap a content type, allowing the developer to focus on other things such as business rules, planning, scaling and designing. It provides features such as auto-generation of editing and presentation views. Archetypes code can be generated from UML using ArchGenXML.
- ArchGenXML
ArchGenXML is a code-generator for CMF/Plone applications (a Product) based on the Archetypes framework. It parses UML models in XMI format (.xmi, .zargo, .zuml), created with applications such as ArgoUML, Poseidon or ObjectDomain. A brief tutorial for ArchGenXML is present on the plone.org site.
- ATCT
ATContentTypes - the Plone content types written with Archetypes, which replace the default CMF content types in Plone 2.1 onwards.
- BBB
When adding (or leaving) a piece of code for backward compatibility, we use a BBB comment marker with a date.
- browserview
See views.
- Buildout
Buildout is a Python-based build system for creating, assembling and deploying applications from multiple parts, some of which may be non-Python-based. It lets you create a buildout configuration and reproduce the same software later. See buildout.org
- Catalog
The catalog is an internal index of the content inside Plone so that it can be searched. The catalog object is accessible through the ZMI as the portal_catalog object.
- CMF
The Content Management Framework is a framework for building content-oriented applications within Zope. It has formed the basis of Plone content from the start.
- Collective
A community repository for add-on code, where other developers can find it, use it, and contribute fixes and improvements.
- control panel
The Control Panel is the place where many parameters of a Plone site can be set. Here add-ons can be enabled, users and groups created, the workflow and permissions can be set and settings for language, caching and many other can be found. If you have “Site Admin” permissions, you can find it under “Site -> Site Setup” in your personal tools.
- CSS
Cascading Style Sheets is a way to separate content from presentation. Plone uses this extensively, and it is a web standard documented at the W3C web site. If you want to learn CSS, we recommend the W3Schools CSS Resources and the SitePoint CSS Reference.
- Dexterity
Dexterity is an alternative to Archetypes, Plone's venerable content type framework. Being more recent, Dexterity has been able to learn from some of the mistakes that were made in Archetypes, and - more importantly - leverage some of the technologies that did not exist when Archetypes was first conceived. Dexterity is built from the ground up to support through-the-web type creation. Dexterity also allows types to be developed jointly through-the-web and on the filesystem. For example, a schema can be written in Python and then extended through the web.
- Diazo
The standard way to theme Plone sites from Plone 5 onwards. It consists in essence of a static ‘theme’ mockup of your website, with HTML, CSS and JavaScript files, and a set of rules that will ‘switch in’ the dynamic content from Plone into the theme.
- Document
A document is a page of content, usually a self-contained piece of text. Documents can be written in several different formats, plain text, HTML or (re)Structured Text. The default home page for a Plone site is one example of a document.
- DTML
Document Template Markup Language. DTML is a server-side templating language used to produce dynamic pieces of content, but is now superseded by ZPT for HTML and XML content. It is still used sparingly for non-XML content like SQL and mail/CSS.
- Dublin Core
Dublin Core is a standard set of metadata which enables the description of resources for the purposes of discovery.
- easy_install
A command-line tool for automatic discovery and installation of packages into a Python environment. The easy_install script is part of the setuptools package, which uses the Python Package Index as its source for packages.
- Egg
See Python egg.
- Expiration Date
The last day an item should show up in searches, news listings etc. Please note that this doesn’t actually remove or disable the item, it merely makes it not show up in searches.
This is part of the Dublin Core metadata that is present on all Plone objects.
- GenericSetup
An XML-based configuration system for Zope and Plone applications.
Todo
Add reference.
- gettext
UNIX standard software translation tool.
- i18n
i18n is shorthand for “internationalization” (the letter I, 18 letters, the letter N) - and refers to the process of preparing a program so that it can be used in multiple languages without further altering the source. Plone is fully internationalized.
- i18ndude
Support tool to create and update message catalogs from instrumented source code.
- JSON
JavaScript Object Notation. JSON is a lightweight text-based open standard designed for human-readable data interchange. In short, it’s a string that looks like a JavaScript array, but is constrained to 6 simple data types. It can be parsed by many languages.
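For instance, round-tripping a structure through Python's stdlib json module:

```python
import json

# Serialize a Python structure to a JSON string and parse it back
record = {"title": "Glossary", "tags": ["plone", "zope"], "published": True}
text = json.dumps(record)
print(text)  # a plain string, safe to ship over the wire
assert json.loads(text) == record
```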
- KSS
Kinetic Style Sheets is a client-side framework for implementing rich user interfaces with AJAX functionality. It allows attaching actions to elements using a CSS-like rule syntax. KSS was added to Plone in Plone 3 and removed in Plone 4.3, because jQuery made it obsolete.
- Kupu
Kupu was the user-friendly graphical HTML editor component that used to be bundled with Plone, starting with version 2.1. It has since been replaced by TinyMCE.
- l10n
Localization is the actual preparing of data for a particular language. For example Plone is i18n aware and has localization for several languages. The term l10n is formed by the first and last letter of the word and the number of letters in between.
- Layer
A layer is a set of templates and scripts that get presented to the user. By combining these layers, you create what is referred to as a skin. The order of layers is important: the topmost layers will be examined first when rendering a page. Each layer is an entry in portal_skins -> 'Contents', and is usually a Filesystem Directory View or a Folder.
- LDAP
Lightweight Directory Access Protocol. An internet protocol which provides a specification for user-directory access by wire, attribute syntax, representation of distinguished names, search filters, an URL format, a schema for user-centric information, authentication methods, and transport layer security. Example: an email client might connect to an LDAP server in order to look up an email address for a person by a person’s name.
- Manager
The Manager Security role is a standard role in Zope. A user with the Manager role has ALL permissions except the Take Ownership permission. Also commonly known as Administrator or root in other systems.
- METAL
Macro Expansion Template Attribute Language. See ZPT.
- Monkey patch
A monkey patch is a way to modify the behavior of Zope or a Product without altering the original code. Useful for fixes that have to live alongside the original code for a while, like security hotfixes, behavioral changes, etc.
The term “monkey patch” seems to have originated as follows: First it was “guerrilla patch”, referring to code that sneakily changes other code at runtime without any rules. In Zope 2, sometimes these patches conflict. This term went around Zope Corporation for a while. People heard it as “gorilla patch”, though, since the two words sound very much alike, and the word gorilla is heard more often. So, when someone created a guerrilla patch very carefully and tried to avoid any battles, they tried to make it sound less forceful by calling it a monkey patch. The term stuck.
- Namespace package
A feature of setuptools which makes it possible to distribute multiple, separate packages sharing a single top-level namespace. For example, the packages plone.theme and plone.portlets both share the top-level plone namespace, but they are distributed as separate eggs. When installed, each egg's source code has its own directory (or possibly a compressed archive of that directory). Namespace packages eliminate the need to distribute one giant plone package, with a top-level plone directory containing all possible children.
- OpenID
A distributed identity system. Using a single URI provider an individual is able to login to any web site that accepts OpenID using the URI and a password. Plone implements OpenID as a PAS plug-in.
- PAS
The Pluggable Authentication Service (PAS) is a framework for handling authentication in Zope 2. PAS is a Zope acl_users folder object that uses "plugins" that can implement various authentication interfaces (for example LDAP and OpenID) that plug into the PAS framework. Zope 3 also uses a design inspired by PAS. PAS was integrated into Plone at the 2005 San Jose Sprint.
- Plonista
A Plonista is a member of the Plone community. It can be somebody who loves Plone, or uses Plone, or someone who spreads Plone and Plone knowledge. It can also be someone who is a Plone developer, or it can be all of the above.
- Product
A Plone-specific module that extends Plone functionality and can be managed via the Plone Control Panel. Plone Products often integrate non-Plone-specific modules for use within the Plone context.
- Python egg
A widely used Python packaging format which consists of a zip or .tar.gz archive with some metadata information. It was introduced by setuptools.
A way to package and distribute Python packages. Each egg contains a setup.py file with metadata (such as the author's name and email address and licensing information), as well as information about dependencies. setuptools, the Python library that powers the egg mechanism, is able to automatically find and download dependencies for eggs that you install. It is even possible for two different eggs to concurrently use different versions of the same dependency. Eggs also support a feature called entry points, a kind of generic plug-in mechanism.
- Python package
A general term describing a redistributable Python module. At the most basic level, a package is a directory with an __init__.py file, which can be blank.
- Python Package Index
The Python community’s index of thousands of downloadable Python packages. It is available as a website to browse, with the ability to search for a particular package. More importantly, setuptools-based packaging tools (most notably,
buildoutand
easy_install) can query this index to download and install eggs automatically. Also known as the Cheese Shop or PyPI.
- Python path
The order and location of folders in which the Python interpreter will look for modules. It's available in Python via sys.path. When Zope is running, this typically includes the global Python modules making up the standard library, the interpreter's site-packages directory, where third party "global" modules and eggs are installed, the Zope software home, and the lib/python directory inside the instance home. It is possible for Python scripts to include additional paths in the Python path during runtime. This ability is used by zc.buildout.
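The same mechanism is visible in any Python session (generic stdlib behaviour; "extra_modules" below is a hypothetical directory name):

```python
import sys

# sys.path is an ordinary list of directories, searched in order
print(sys.path[0])

# A script can prepend its own module directory at runtime
sys.path.insert(0, "extra_modules")
assert sys.path[0] == "extra_modules"
```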
- RAD
Rapid Application Development - A term applied to development tools to refer to any number of features that make programming easier. Archetypes and ArchGenXML are examples of these from the Plone universe.
- Request
Each page view by a client generates a request to Plone. This incoming request is encapsulated in a request object in Zope, usually called REQUEST (or lowercase “request” in the case of ZPT).
- ResourceRegistries
A piece of Plone infrastructure that allows CSS/JavaScript declarations to be contained in separate, logical files before ultimately being appended to the existing Plone CSS/JavaScript files on page delivery. Primarily enables Product authors to "register" new CSS/JavaScript without needing to touch Plone's templates, but also allows for selective inclusion of CSS/JavaScript files and reduces page load by minimizing individual calls to separate blocks of CSS/JavaScript files. Found in the ZMI under portal_css and portal_javascript.
- reStructuredText
The standard plaintext markup language used for Python documentation:
reStructuredText is an easy-to-read plaintext markup syntax and parser system. It is useful for in-line program documentation (such as Python docstrings), for quickly creating simple web pages, and for standalone documents. reStructuredText is designed to be extensible for specific application domains. The reStructuredText parser is a component of Docutils.
reStructuredText is a revision and reinterpretation of the StructuredText and Setext lightweight markup systems.
- Skin
A collection of template layers (see layer) used as the search path when a page is rendered and the different parts look up template fragments. Skins are defined in the ZMI in the portal_skins tool. Used for both presentation and code customizations.
- slug
A ZCML slug is a one-line file created in a Zope instance's etc/package-includes directory, with a name like my.package-configure.zcml. The contents of the file would be something like:
<include package="my.package" file="configure.zcml" />
This is the Zope 3 way to load a particular package.
- Software home
The directory inside the Zope installation (on the filesystem) that contains all the Python code that makes up the core of the Zope application server. The various Zope packages are distributed here. Also referred to as the SOFTWARE_HOME environment variable. It varies from one system to the next, depending where you or your packaging system installed Zope. You can find the value of this in the ZMI > Control Panel.
- Sprint
Based on ideas from the extreme programming (XP) community. A sprint is a three to five day focused development session, in which developers pair in a room and focus on building a particular subsystem.
- STX
- StructuredText
Structured Text is a simple markup technique that is useful when you don’t want to resort to HTML for creating web content. It uses indenting for structure, and other markup for formatting. It has been superseded by reStructuredText, but some people still prefer the old version, as it’s simpler.
- Syndication
Syndication shows you the several most recently updated objects in a folder in RSS format. This format is designed to be read by other programs.
- TAL
Template Attribute Language. See ZPT.
- TALES
TAL Expression Syntax. The syntax of the expressions used in TAL attributes.
- TinyMCE
A graphical HTML editor bundled with Plone.
- TODO
The TODO marker in source code records new features, non-critical optimization notes, design changes, etc.
- toolbar
Plone uses a toolbar to have quick access to the content management functions. On a standard instance, this will appear on the left of your screen. However, your site administrator might change this to have a horizontal layout, and it will appear hidden at first when using a smaller-screen device like a phone or tablet.
- Traceback
A Python “traceback” is a detailed error message generated when an error occurs in executing Python code. Since Plone, running atop Zope, is a Python application, most Plone errors will generate a Python traceback. If you are filing an issue report regarding a Plone or Plone-product error, you should try to include a traceback log entry with the report.
To find the traceback, check your event.log log file. Alternatively, use the ZMI to check the error_log object in your Plone folder. Note that your Zope must be running in debug mode in order to log tracebacks.
A traceback will be included with nearly all error entries. A traceback will look something like this: “Traceback (innermost last): … AttributeError: adapters” They can be very long. The most useful information is generally at the end.
- traversal
Publishing an object from the ZODB by traversing its parent objects, resolving security and names in scope. See the Acquisition chapter in the Zope 2 book.
- TTP
Actions done TTP are performed “Through the Plone” interface. It is normally a lazy way of telling you that you should not add things from the ZMI, as is the case for adding content, for example.
- TTW
This is a general term meaning an action can be performed “Through The Web,” as opposed to, say, being done programmatically.
- UML
The Unified Modeling Language is a general-purpose modeling language that includes a standardized graphical notation used to create an abstract model of a system, referred to as a UML model. With the use of ArchGenXML, this can be used to generate code for CMF/Plone applications (a Product) based on the Archetypes framework.
- virtualenv
virtualenv is a tool for creating a project directory with a Python interpreter that is isolated from the rest of the system. Modules that you install in such an environment remain local to it, and do not impact your system Python or other projects.
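As a minimal sketch of the same idea using Python’s built-in venv module (the standard-library successor to the virtualenv tool; the directory name here is an arbitrary example):

```python
# Create an isolated environment programmatically with the stdlib venv module.
import venv
import os
import tempfile

target = os.path.join(tempfile.mkdtemp(), "demo-env")
venv.create(target, with_pip=False)  # with_pip=False keeps the example fast

# The environment gets its own interpreter layout and a pyvenv.cfg marker file.
print(os.path.exists(os.path.join(target, "pyvenv.cfg")))  # True
```

Anything installed into such an environment stays under its own directory and does not touch the system Python.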
- VirtualHostMonster
A Zope technology that supports virtual hosting. See VirtualHostMonster URL rewriting mechanism
- Workflow
Workflow is a very powerful way of mimicking business processes - it is also the way security settings are handled in Plone.
- XXX
XXX is a marker in the comments of the source code that should only be used during development to note things that need to be taken care of before a final (trunk) commit. Ideally, one should not expect to see XXXs in released software. XXX shall not be used to record new features, non-critical optimization, design changes, etc. If you want to record things like that, use TODO comments instead. People making a release shouldn’t care about TODOs, but they ought to be annoyed to find XXXs.
- ZCA
The Zope Component Architecture. See A Comprehensive Guide to Zope Component Architecture.
- ZCML
Zope Configuration Markup Language. Zope 3 separates policy from the actual code and moves it out to separate configuration files, typically a configure.zcml file in a buildout. This file configures the Zope instance. ‘Configuration’ might be a bit misleading here and should be thought of more as wiring: in earlier Zope versions packages were automatically imported and loaded, but this is not the case in Zope 3. If you don’t enable a package explicitly, it will not be found.
- ZEO server
ZEO (Zope Enterprise Objects) is a scaling solution used with Zope. The ZEO server is a storage server that allows multiple Zope instances, called ZEO clients, to connect to a single database. ZEO clients may be distributed across multiple machines. For additional info, see the related chapter in The Zope Book.
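As an illustrative sketch (not from the glossary itself): in a buildout, a ZEO server is typically defined as its own part using the plone.recipe.zeoserver recipe; the address value below is an assumed example.

```ini
[zeoserver]
recipe = plone.recipe.zeoserver
; assumed example address; ZEO clients connect to this host:port
zeo-address = 127.0.0.1:8100
```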
- ZMI
The Zope Management Interface, accessible through the web. Accessing it is as simple as appending /manage to your URL, or visiting Plone Setup and clicking the Management Interface link (click ‘View’ to go back to the Plone site). Be careful in there, though - it’s the “geek view” of things, and is not straightforward, nor does it protect you from doing stupid things. :)
- ZODB
The Zope Object Database is where your content is normally stored when you are using Plone. The default storage backend of the ZODB is filestorage, which stores the database on the file system in file(s) such as Data.fs, normally located in the var directory.
- Zope instance
An operating system process that handles HTTP interaction with a Zope database (ZODB). In other words, the Zope web server process. Alternatively, the Python code and other configuration files necessary for running this process.
One Zope installation can support multiple instances. Use the buildout recipe plone.recipe.zope2instance to create new Zope instances in a buildout environment.
Several Zope instances may serve data from a single ZODB using a ZEO server on the back-end.
- Zope product
A special kind of Python package used to extend Zope. In old versions of Zope, all products were directories inside the special Products directory of a Zope instance; these would have a Python module name beginning with
Products. For example, the core of Plone is a product called CMFPlone, known in Python as
Products.CMFPlone.
- ZPL
Zope Public License, a BSD-style license that Zope is licensed under.
- ZPT
Zope Page Templates is the templating language that is used to render the Plone pages. It is implemented as two XML namespaces, making it possible to create templates that look like normal HTML/XML to editors. See
https://docs.plone.org/appendices/glossary.html
The declaration inside the class body is not a definition and may declare the member to be of incomplete type (other than void).
struct Foo;
struct S
{
    static int a[]; // declaration, incomplete type
    static Foo x;   // declaration, incomplete type
};
int S::a[10]; // definition, complete type
struct Foo {};
Foo S::x;     // definition, complete type

Static data members
The static member objects are not part of the object. If the static member is declared thread_local (since C++11), there is one such object per thread. Otherwise, there is only one instance of the static member object in the entire program, with static storage duration. The static members exist even if no objects of the class have been defined.
Static data members cannot be mutable.
Static data members of classes in namespace scope have external linkage.
Local classes (classes defined inside functions) and unnamed classes, including member classes of unnamed classes, cannot have static data members.
References
- C++11 standard (ISO/IEC 14882:2011):
- 9.4 Static members [class.static]
- C++98 standard (ISO/IEC 14882:1998):
- 9.4 Static members [class.static]
http://en.cppreference.com/mwiki/index.php?title=cpp/language/static&oldid=65657
This solves LPs using the dual simplex method. More...
#include <AbcSimplexDual.hpp>
This solves LPs using the dual simplex method.
It inherits from AbcSimplex. It has no data of its own and is never created - only cast from a AbcSimplex object at algorithm time.
Definition at line 49 of file AbcSimplexDual.hpp.
Member function documentation (copy from ClpSimplexDual):
Create dual pricing vector.
The duals are updated.
The duals are updated by the given arrays.
This is in values pass, so no changes to primal are made. While dualColumn gets flips, this does actual flipping. Returns number flipped.
Undo a flip.
Array has tableau row (row section) Puts candidates for rows in list Returns guess at upper theta (infinite if no pivot) and may set sequenceIn_ if free Can do all (if tableauRow created)
Array has tableau row (row section) Just does slack part Returns guess at upper theta (infinite if no pivot) and may set sequenceIn_ if free.
Do all given tableau row.
Chooses incoming Puts flipped ones in list If necessary will modify costs.
Chooses part of incoming Puts flipped ones in list If necessary will modify costs.
This sees what is best thing to do in branch and bound cleanup If sequenceIn_ < 0 then can't do anything.
Chooses dual pivot row Would be faster with separate region to scan and will have this (with square of infeasibility) when steepest For easy problems we can just choose one of the first rows we look at.
Checks if any fake bounds active - if so returns number and modifies updatedDualBound_ and everything.
Free variables will be left as free Returns number of bounds changed if >=0 Returns -1 if not initialize and no effect fills cost of change vector
Fast iterations.
Misses out a lot of initialization. Normally stops on maximum iterations, first re-factorization or tentative optimum. If looks interesting then continues as normal. Returns 0 if finished properly, 1 otherwise. Gets tableau column - does flips and checks what to do next. Knows tableau column in 1, flips in 2 and gets an array for flips (as serial here).
see if cutoff reached
Does something about fake tolerances.
Perturbs problem.
Perturbs problem B.
Make non free variables dual feasible by moving to a bound.
Ending part of dual.
https://www.coin-or.org/Doxygen/Clp/classAbcSimplexDual.html
Hello,
I would like to capture the issue closed date and time to a custom field, I have created the custom field as Date and time picker and added the post function to write the date to the custom field.
I have added the Groovy script and I am getting the close date, but it is showing as “2 days ago”, “1 day ago”, “1 minute ago” instead of the actual date and time. When I pull the report we are not getting any output in the CSV.
Please find the Groovy script and the custom field value.
import com.atlassian.jira.component.ComponentAccessor
import java.sql.Timestamp
def customFieldManager = ComponentAccessor.getCustomFieldManager()
def dateCf = customFieldManager.getCustomFieldObject("customfield_10104") // Date time fields require a Timestamp
issue.setCustomFieldValue(dateCf, new Timestamp((new Date()).time))
Please find the Screenshot for the same
Can you please help me to solve this issue.
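For readers who want to see the key conversion outside Jira, here is a plain-Java sketch of the same idea (the class and method names are illustrative, not Jira API): a Jira date-time custom field stores a java.sql.Timestamp, built here from the current java.util.Date via its epoch-millisecond value.

```java
import java.sql.Timestamp;
import java.util.Date;

// Illustrative only: shows the Date -> Timestamp conversion that the
// post function above relies on for date-time custom fields.
public class CloseDateDemo {
    static Timestamp now() {
        return new Timestamp(new Date().getTime());
    }

    public static void main(String[] args) {
        Date before = new Date();
        Timestamp ts = now();
        // The Timestamp preserves the same instant, to the millisecond.
        if (ts.getTime() < before.getTime()) {
            throw new AssertionError("timestamp went backwards");
        }
        System.out.println("Close date would be stored as: " + ts);
    }
}
```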
Hi John,
In the CSV we are able to see the date and time, but in the JQL we are seeing only the # of days or mins
So the CSV is good. Did you turn off the Relative dates based on the article above?
Thank you very much. After setting jira.lf.date.relativize = false it is showing the date and time.
https://community.atlassian.com/t5/Jira-Software-discussions/Get-the-close-date-to-Custom-field/m-p/1374712
Hi,
Please welcome a new CLion 2018.3 EAP (build 183.2940.13)!
Last week CLion 2018.3 Early Access Program started with the Initial Remote Development support. This week a new build is available and it addresses an issue with the symlinks used in header search paths (CPP-14211).
Talking about the header search paths, we’ve also fixed an issue with searching through the user and library header search paths:
- #include <...> now looks for the header files in the libraries only.
- #include "..." first looks in the user’s search path, then in the libraries.
This fix actually means that a user header file shouldn’t hide a library header by mistake.
There are also several UI fixes across the whole IntelliJ platform, like for example drawing characters in Monospaced font was fixed (JRE-847).
Full release notes are available by the link.
Your CLion Team
JetBrains
The Drive to Develop
PLEASE PLEASE don’t break existing great things….CPP-14300 is very very silly and it’s a basic VCS feature, it worked perfectly until now since 2016.
Can’t find that one. Probably was moved to IDEA tracker already, do you have a link not just a number?
Sorry Anastasia, but it’s frustrating when a thing worked perfectly years and now it’s broken.
Good luck at CppCon!
Thanks, we’ll check that
Is it planned to enable SSH remote debugging (without gdbserver) for local projects (or at least in the attach-to-process scenario) in 2018.3? I see that SSH remote debugging does work for remote projects, but it only works when the project is built and started with CLion. For projects that only build outside of CLion with a different build system, it appears that there is no option for SSH debugging. Only gdbserver-based remote debugging works (which requires transfer of gigabytes of symbols, making it useless).
It seems like almost all of the work to support general-purpose SSH remote debugging is already done, so I’m assuming (maybe incorrectly) that it wouldn’t take much to support it for binaries built or started outside of CLion.
This is definitely among the future plans. However, can’t estimate it now.
What’s the plan for ctest support? Would like to switch from qt creator, but this is holding me back.
We had a plan to start it, but then it was postponed as we detected some critical performance issues with unit test integration and spent all our efforts on reimplementing some parts to improve the performance. Meanwhile, this is still in plans. Likely for 2019.
https://blog.jetbrains.com/clion/2018/09/clion-2018-3-eap-fixes-for-remote-dev-and-more/?replytocom=73851
POA’s token of appreciation
WHO lists snake bites amongst top 20 killers
Battle for telecom market
Firms lose land in industrial parks

Issue No. 533, Aug 10 - 16, 2018. Ushs 5,000, Kshs 200, RwF 1,500, SDP 8

Fresh storm at Bank of Uganda
Mutebile, Kasekende fight cited in Mobile Money tax saga
Finance summons Mutebile
Staff up in arms over mess in hiring process

YOU BUY THE TRUTH, WE PAY THE PRICE

Independent Publications Limited, Plot 82/84, Kanjokya Street, P. O. Box 3304, Kampala, Uganda Tel: +256-312-637-391/ 2/ 3/ 4 Fax: +256-312-637-396
INBOX

Incumbents: MTN, Airtel
Challengers: Africell, UTL
Struggling: Smile, Smart
Quitting: Vodafone
Cover story: Fresh storm at BoU
4 The Week
9 The Last Word: Uganda’s incompetence paradox II: How Uganda performs well in spite of corruption and incompetence and what it teaches us
14 Analysis: Politics of tribe and religion in Arua Municipality by-election
29 Business: Battle for telecom market: Bigger firms choke smaller and new telcos
36 Health: WHO lists snake bites amongst top 20 killers: In Uganda, many antivenoms do not work, all snakebite first aid information is wrong
39 Comments: Mnangagwa in talks with Chamisa after narrow win; Funding Family Planning: Pallisa district should be commended
39 Arts & Culture: Sustainable careers and decent livelihoods for artists
STRATEGY & EDITORIAL DIRECTOR: Andrew M. Mwenda
MANAGING EDITOR: Joseph Were
INVESTIGATIONS EDITOR: Haggai Matsiko
BUSINESS EDITOR: Isaac Khisa
WRITERS: Ronald Musoke, Flavia Nassaka, Ian Katusiime, Agnes Nantaba, Julius Businge
DESIGN/LAYOUT: Sarah Ngororano
CARTOONIST: Harriet Jamwa
PHOTOGRAPHER: Jimmy Siya
Photo: Regional Heads of State (L-R) President Uhuru Kenyatta of Kenya, President Museveni of Uganda and President Omar al-Bashir of Sudan witnessing the South Sudan Peace agreement in Khartoum on August 5.

“We may disagree politically but the … what I call maturity in leadership.” Amama Mbabazi, former Prime Minister, during the giveaway ceremony of his niece.

Photo: President Museveni (M) with Ashanti King, Asantehene Otumfou Nana Osei Tutu II from Ghana, and the Kabaka of Buganda Ronald Muwenda Mutebi at State House, Entebbe on August 1. Asantehene visited Uganda in honor of the 25th coronation anniversary of the Kabaka.

“I hope that this time, the respective parties in South Sudan are serious and they do not use the ceasefire as tactical instruments of preparing for war.” President Museveni on the new peace deal for South Sudan.

23 - Traffic commanders who were reshuffled by the Inspector General of Police Okoth Ochola
900 - The first batch of CCTV cameras to be used for security in Kampala
15,000 - Community health extension workers the government is going to recruit
The Week

Museveni wins global peace award

President Yoweri Kaguta Museveni was on Aug. 02 awarded a Global Peace Prize for his work in ensuring the return of peace in countries of the great lakes region. Handing him the award during a conference held at Speke Resort in Munyonyo, the President of the Global Peace Foundation, Dr. Moon Hyun Jin thanked the President for taking part in the reconciliation talks in the neighboring South Sudan and also the contribution he continues to make in Somalia where the army has been deployed under the peace keeping mission – AMISOM. Museveni addressed delegates on the drivers of conflict in Africa and the World, where he pointed out greed for natural resources like minerals and oil as the leading cause of war. He also said Africa has been a target by greedy elements because of its approach of sitting back other than defending what belongs to them. The sixth Global Peace Leadership Conference (GPLC), held under the theme “Moral and Innovative Leadership, New Models for Sustainable Peace and Development”, was aimed at promoting an innovative values-based approach to peace building.

Photo: President Museveni with the award
The Last Word: Uganda’s incompetence paradox II

In 2013 I wrote an article with exactly this headline, the reason this is named “II”. It was about how the state in Uganda exhibits gross corruption and incompetence and yet in many aspects the country performs well. What explains this? I want to suggest that the state in Uganda has been successful in large part because it divested itself of the responsibility to do many things, leaving individuals in our society the freedom to pursue their talents in the market.

I argued in this column last week that the main factor behind economically successful countries is the rate of growth in the value of their exports relative to the rate of growth in the cost of her imports – or what economists call “terms of trade”. Uganda still has a huge trade deficit. However, the deficit has been declining in relative terms even though the absolute figure has grown over the years.

In 1991, Uganda exported goods worth $184m. In 2017, it was $3.35 billion i.e. it had grown 18 fold at an annual average rate of 65%, an impressive performance by historic standards. Meanwhile, in 1991 Uganda’s import bill was $522m. In 2017 it was $5 billion. So our imports have grown 10 fold at an annual average rate 33%. Uganda’s export earnings have grown twice as fast as her import bill. Until I researched this, I thought the reverse was the case.

To appreciate Uganda’s success we need to remember that the biggest challenge poor countries have historically faced is reliance on a few products for a huge share of their export earnings. Thus, whenever there was a sudden fall in the price of this single commodity, the country’s economy would buckle. In 2015, this is exactly what happened to Nigeria, Angola, Equatorial Guinea and Chad, which depend on oil for a huge share of their export earnings (as Zambia does on copper): international prices of oil and copper fell causing huge revenue losses.

I used to think, on the basis of a large body of development economics literature, that the only effective way a country can successfully diversify her exports is through manufacturing. Over the last 25 years, Uganda has remained a producer and exporter of unprocessed products. However, in spite of this, it has successfully diversified her exports across a large basket of goods.

For example, in 1990 coffee was the leading foreign exchange earner and contributed 80.3% of total export receipts. Today coffee is still the leading foreign exchange earner but in 2017 it contributed only 15% of total export receipts, followed by gold at 13%. None of the other export commodities contributes more than 4% of our export revenue. This means that the prices of individual commodities in the basket of our exports can fall without inflicting any significant harm on our overall export earnings.

This export performance is most manifest in Uganda’s trade with her neighbors. In 1995, Uganda exported goods worth $16m to Kenya and imported goods worth $280m from there. In 2016, this figure was $483m to $485m. In twenty years, Uganda has closed her trade deficit with Kenya that was 18 times larger. In 1996, Uganda exported $38m to DRC, $24m to Rwanda, $4.5m to Tanzania and $23.3m to Sudan. Then things went bad. By 2003, Uganda’s exports to Kenya had grown to $78m, for Tanzania stagnated at $5m, while they declined for Rwanda to $20m, DRC to $12m and Sudan to $13m.

I do not recall government of Uganda doing anything to reverse this trend. However, in 2016, Uganda exported goods worth $400m to DRC, $226m to Rwanda (down from $270m in 2014), $112m to Tanzania, $281m to South Sudan (down from $400m in 2014). In 2016, Uganda imported goods worth $72m from Tanzania, $22m from DRC, $12.6m from Rwanda and $157,000 from South Sudan.

Thus, Uganda has cut her trade deficit with Kenya to a miniscule $2m and enjoys a trade surplus with the rest of her neighbors. Actually in real terms (after adjusting to inflation), Kenya has grown her exports to Uganda by only 4% in 23 years, while Uganda has grown hers to Kenya by 1,800% over the same period.

Many Ugandan elites believe Uganda is grossly mismanaged! I share this sentiment. But facts like these only show that our sentiments are wrong. So why are Ugandans angry with a government that outperforms her peers? Some of my friends argue that Ugandans are frustrated because they are not informed. They claim that this is because government and President Yoweri Museveni make no effort to publicize their achievements. I am inclined to believe that this is a small part of a bigger story.

I meet many Uganda government officials and share these success stories with them. However, hardly anyone feels proud of them, including officials of the ministry of trade who should claim the credit. Often they express disbelief and move on. Why is this the case?

I think the answer lies in how Uganda has achieved its export performance. Government liberalized trade and for the most part did very little else. And this may be a good thing – to do nothing. Where a particular government regulation obstructs them, businesspeople just bribe their way out of it. So officials in government are right not to be elated by Uganda’s export performance because they have done nothing to facilitate it. Ugandans are frustrated perhaps because they do not see their government doing much to promote trade. However, they are wrong to think such a hands-off approach is a wrong policy. It is a winner.

There is an even bigger argument to make regarding Uganda’s attitude towards her international trade. For example, Kampala has recently developed a frosty relationship with Kigali over issues we should devote our best diplomatic efforts to solve. Why? Rwanda is the 4th largest destination for our exports and we enjoy a trade surplus with her worth $212m. It is, therefore, in our interests to consistently court Kigali as any businessman would his good customer.

Yet I often meet Ugandan officials who argue that Rwanda should be fought or ignored, an attitude that is destructive to our national interest. It should be the policy of Kampala to ensure Kigali never turns her gaze towards Tanzania and/or elsewhere for her imports because that would hurt our country. I suspect we do not value trade with Rwanda because our exports to it have not grown out of our hard work whose benefits we would be careful not to undermine.

amwenda@independent.co.ug
Fresh storm at Bank of Uganda

By Haggai Matsiko

On Aug.3, at exactly 8:45am, Central Bank Governor, Emmanuel Tumusiime Mutebile arrived at the Finance Ministry, where he had been summoned for a meeting at exactly 9:00am. Finance Minister, Matia Kasaija, had asked him to come and explain why his team had criticized the mobile money tax on Aug.1 while appearing before parliament’s finance committee.

Bank of Uganda’s Dr. Charles Abuka, the Director, Statistics Department, had told the parliamentary committee that the tax on mobile money was “discriminative”, “unfair”, and a threat to growth of financial inclusion. Aomu Mackay, the Director National Payments, Ivan Setimba, Deputy Director, National Payments in charge of financial inclusion, and Christine Namanya, the deputy director research, escorted Abuka to parliament.

Abuka’s statements sparked outrage in some quarters at BoU, the Finance Ministry and State House. The comments appeared a major slap in the face of officials at Finance and President Museveni who have intensively labored to explain and get Ugandans to accept the tax. As part of the budget, the Finance Ministry levied a one percent tax on the value of mobile money transactions but following intense public backlash, the president directed that it be whittled down to 0.5 percent. Telecom companies and mobile money dealers have been lobbying to have the tax scrapped or reduced again. Instead of making a case for government, Abuka appeared to bat for the other side.

Enraged, President Museveni called Mutebile expressing concerns about the statements, sources at State House told The Independent. The following day, Kasaija summoned Mutebile. Mutebile was also furious. He had already ordered Abuka to explain his statements at parliament. Later, the Governor’s office informed the officials that they had been summoned by Finance because of the remarks they had made.

While the governor arrived at Finance 15 minutes to 9:00 am, Abuka and his team arrived late—several minutes past 9:00 am, sources at the Finance Ministry told The Independent. State Minister for Planning, David Bahati chaired the meeting, which took place in the Minister’s Boardroom. At the meeting Bahati asked Mutebile to explain why the BoU team while in parliament had criticised the mobile money tax yet they (BoU) had been part of a string of meetings that had agreed that the tax was critical for government to fund the budget—the mobile money tax was expected to raise some Shs. 115 billion.

Mutebile responded that his office had communicated the BoU position to the team that appeared in parliament, however, he did not know where they got the position they presented. He said he had not cleared that position and that it was not the BoU position. Then the Governor turned to Dr. Charles Abuka and asked him to explain where the position he presented had come from. Abuka also distanced himself from his own position, according to sources at the Finance Ministry, who are knowledgeable about what transpired. Abuka said that the media had misquoted him. They only reported the bit where he was discussing elements that make a good tax and how the tax affected financial inclusion. The media, he said, just picked that.

However, Abuka later apologized. At one point, the Secretary to the Treasury, Keith Muhakanizi, who also attended the meeting, came to Abuka’s rescue. He said that when some of these civil servants go to parliament, they get intimidated and end up saying what they are not supposed to say.

What Muhakanizi did not know is that when Abuka made the remarks in parliament, they kicked off an informal investigation within the walls of BoU. How could these officials present a position contrary to that of the Governor? Officials wondered. Phone calls were made. Then word started circulating that Abuka passed by the Deputy Governor’s office before he went to parliament. The conclusion was that the position changed between the Governor’s office and that of his deputy.

At the central bank, Abuka’s position was seen as part of the simmering institutional politics, which some say are aimed at undermining the authority of the Governor. In the past, The Independent has reported about a power struggle between the Governor and his deputy. When The Independent revealed this in a series of articles early this year, the Governor dismissed the reports. The two officials have even had lunch dates to try and bury the hatchet. But tensions have remained mostly amongst their proxies. The central bank is divided into two camps—those loyal to Kasekende and those loyal to Mutebile.

While Mutebile heads the bank, the way his roles and those of his deputy are structured give the latter immense powers. Most importantly, as Deputy Governor, Kasekende is directly in charge of Finance and Administration, and by extension the day to day running of the bank. This has given him a vantage
Politics of tribe and religion in Arua Municipality by-election

Hospital Road, in Arua town, is a heavily canvassed street. The residents of the street are used to endless processions of supporters walking on foot and others in trucks using loud public address systems convincing voters to vote for particular politicians. In June, the street was the center of chaos when the body of slain Arua Municipality Member of Parliament Col. Ibrahim Abiriga arrived from Kampala.

Now, as locals prepare to vote for his replacement in parliament, in a by-election scheduled for Aug.15, the street is witnessing another round of intense electioneering action as 12 candidates compete for the slot. Apart from four candidates standing as flag bearers for political parties – NRM, JEEMA, FDC and DP, the rest are Independents. FDC has two strong candidates – Bruce Mudhafir Musema, the flag bearer who came second in the 2016 race, and former Terego County MP Kasiano Wadri, who lost his seat in 2016. There are fears this might divide the opposition vote come Wednesday.

However, for the acting chairman of the FDC Electoral Commission, Hajji Abdul Hakim Moli, this is ‘a minor issue’ in the politics of Arua. What shapes politics here, Moli says, are always tribe and religion and recently the politics of the elite and the illiterates. “Most of the candidates have set their campaign basing on tribe”, he said. For instance, he says that when Kasiano arrived here, other than coming to the party, he called a meeting of Terego leaders across the political divide. Apparently, the Terego have some good population in the municipality.

Other candidates are following the same trend. When The Independent asked Musema about his plan for victory, he was quick to mention that he has already won the race since he is a son of the land. “I am an Aringa. I have so many supporters here. These others come from the neighboring constituencies, Koboko, Terego,” Musema said, referring to the NRM candidate Nusra Tiperu and Kasiano, his biggest competitors.

Photo: A campaign procession along hospital road in Arua Town. INDEPENDENT/Flavia Nassaka

Photo: FDC President Patrick Amuriat at the party delegates conference in November 2017. INDEPENDENT/Jimmy Siya
Of all the reasons Patrick Oboi Amuriat, the president of Forum for Democratic Change (FDC), has given to explain the changes he has made with the opposition leadership in parliament, one is most believable at least according to party insiders. “The changes were informed by a number of factors including loyalty to the party and contribution to the party financially,” Amuriat told The Independent.

In these changes, Betty Aol Ochan emerged the new Leader of Opposition, Francis Mwijukye, Buhweju County MP, a parliamentary Commissioner, Roland Kaginda, a former deputy LOP, became a member of the global Inter Parliamentary Union (IPU), Mubarak Munyagwa, the Kawempe … on Statutory Authorities and State Enterprises (COSASE), and Moses Kasibante, the Lubaga North MP, his vice. Budadiri West MP Nandala Mafabi bounced back at the Public Accounts Committee (PAC), William Nzoghu, Busongora County North, a representative in the Pan-African Parliament, Franka Akello, Agago Woman MP, the new head of the committee on Local Government, Gilbert Olanya of Kilak South, became her deputy and Kaps Hassan Fungaroo, Obongi County MP, the new head of the Government Assurances committee. Ibrahim Semujju of Kira Municipality became the Opposition Chief Whip. That Nganda was on the list surprised many given that unlike all the other appointees, he is a supporter of the organization camp … president Mugisha Muntu, who lost the presidency to Amuriat.

Generally, Amuriat’s changes are being seen as a token of appreciation to his loyalists and a purge of those that are loyal to the rival group. Some insiders say these changes might be the final push that kicks Muntu’s camp out of FDC. Most notably, Amuriat fired Winnie Kiiza, who was the LOP, and Abdu Katuntu, who headed COSASE.

This was not surprising. Amuriat’s camp, the defiance camp, has been criticized for being highly intolerant. Indeed, when Amuriat said that he was guided by party loyalty in picking the new group, some interpreted his statement to mean that he didn’t consider those he fired loyal to the party.
Former Prime Minister Amama Mbabazi hands over the daughter to the family of the President of South Africa, Cyril Ramaphosa.
Andile and Birungi in a group photo with their families and other guests.
Museveni's wife, Janet and Mbabazi organizing the function.
President Ramaphosa gives a speech at the ceremony.

Glamour, pomp, colour and a seamless fusion of two African traditions were on show as South African President Cyril Ramaphosa's son, Andile, got introduced by his Ugandan fiancée, Bridget Birungi, at a traditional marriage ceremony known as "Kuhingira" on Aug. 4 in Kampala.
The ceremony attended by high-profile guests was held at the residence of the bride's uncle, John Patrick Amama Mbabazi, a former Prime Minister of Uganda, in the leafy upscale Kololo suburb.
Birungi was born to the late Shadrack Rwakairu and Peace Ruhindi but Amama took over the raising of Birungi after her father's death in the early 1980s.
Political rivalries were shoved aside as President Yoweri Museveni and his long-term friend-cum-political rival, Amama Mbabazi, met and reminisced, sat next to each other and chatted away as the ceremony went on, occasionally smiling and nodding.
The guest list read like a "who is who" of Uganda's political aristocracy. Emmanuel Tumusiime Mutebile, the Bank of Uganda Governor, Chief Justice Bart Katureebe and former information minister, Jim Muhwezi, as well as business moguls Gordon Wavamunno and Sudhir Ruparelia, were some of the guests.
Other high-profile dignitaries included Archbishop Stanley Ntagali, Prince Kassim Nakibinge, businessman Martin Aliker, former FDC President Maj. Gen. Mugisha Muntu and Buganda Katikkiro, Charles Peter Mayiga. Amos Nzeyi, Prime Minister Ruhakana Rugunda, Mathew Rukikaire, Hope Mwesigye and Brig. Timothy Mutebile, the brother to the BoU Governor …
Not even the early morning gloomy weather could dampen the thrilling ceremony which normally precedes the wedding in western Ugandan tradition. As the day grew on and the overcast clouds cleared, business and public transport got interrupted, security tightened and some roads were closed as a 150-strong Ramaphosa-led entourage and host, President Yoweri Museveni, made way to Amama Mbabazi's home to attend the traditional ceremony. Ex-wife, Hope, and First Lady, Dr. Tshepo Ramaphosa, escorted the South African president.
Ramaphosa's son, Andile, had been to Mbabazi's home in May this year when Bridget introduced him for the first time. But that ceremony is usually low key as it only serves to make the man's intentions known to the girl's parents. That ceremony also gives both families a chance to negotiate the bride price.
This time round, it became clear Ramaphosa's entourage had finally been accepted into the Mbabazi household, when the leader of their entourage, a middle-aged lady, was offered a drink. She had a gulp of the traditional beer served from a long-necked gourd as a cacophony of drums, traditional music and the vigorous Kigezi dance engulfed the home.
Andile and Bridget, both in their late thirties, reportedly met in China while studying at the Hong Kong University of Science and Technology. Andile currently works with Blue Crane, a Johannesburg-based financial advisory and investment firm.
"We met almost ten years ago. I was in Beijing where I worked as an expert while she was finishing her studies in engineering. Then she went on to do her Masters. We went on to live in South Africa," said Andile.
Mbabazi, who handed over Birungi to the father of the groom, expressed joy that Ramaphosa's family had met all the set conditions.
"We are very happy to fulfill part of the handing over to you the hand of our daughter, Bridget Birungi," Mbabazi said.
On receiving his daughter-in-law, Ramaphosa said, "We accept her as our daughter firstly and we also accept her as our son's beloved wife and I can assure you that the two of them are deeply in love."
Ramaphosa challenged his daughter-in-law to hold tight to her husband. "Many girls in South Africa are envious of you, even on social media, they were saying, why did he go so far when we are here? They don't know that you fell in love in China. Hold onto your man and never let him go."
Ramaphosa also said weddings are a consolidation of relationships not only between families but also people and communities. "In this case, we believe it is a consolidation of the relations between South Africa and Uganda and this is going to cement us even more."
Ramaphosa had touched down at Entebbe International Airport a day earlier, on Aug. 3, and held a brief meeting with his counterpart, Yoweri Museveni, at State House Entebbe. Museveni and Ramaphosa discussed issues related to the socio-economic and political affairs affecting the two sister countries and Africa at large. Ramaphosa applauded the leadership of Uganda for making the relationship meaningful and supporting South Africa in its earlier days of the struggle against Apartheid.
On his part, Museveni welcomed Ramaphosa who he described as a freedom fighter and sharp businessman. He also applauded him for being appreciative of the African heritage. This was in reference to Ramaphosa's visit to one of Museveni's farms, years back, where he reportedly picked interest in the long-horned Ankole cattle. He would later return and acquire 43 of them for his farm. At the function, both men talked about the cattle and Museveni joked that Ramaphosa underpaid him for the cattle.
Ramaphosa said the cattle had since multiplied into a huge herd both in Kenya and South Africa. Ramaphosa also revealed that he had since become the president of the South African Ankole Cattle Association.
The ceremony presented an opportunity for Mbabazi to catch up with his erstwhile colleagues in the ruling NRM government where he served in various capacities for decades until he was fired three years ago. Mbabazi went on to contest against Museveni in the 2016 presidential election, emerged third and unsuccessfully challenged the results in the Supreme Court. He has since kept a low profile.
By Amy Hawkins

China's Big Brother

Daily life in China is gated by … Chinese … police …
… essentially reliant on Chinese companies for their telecoms and digital services. Transsion Holdings, a Shenzhen-based company, was the No. 1 smartphone company in Africa in 2017. ZTE, a Chinese telecoms giant, provides the infrastructure for the Ethiopian government to monitor its citizens' communications. Hikvision, the world's leading surveillance camera manufacturer, has just opened an office in Johannesburg.
The latest is CloudWalk Technology, a Guangzhou-based start-up that has signed a deal with the Zimbabwean government to provide a mass facial recognition program. The agreement was put on hold until after Zimbabwe's elections on July 30.
But if it goes through, it will enable Zimbabwe, a country with a bleak record on human rights, to replicate parts of the surveillance infrastructure that have made freedoms so limited in China. And by gaining access to a population with a racial mix far different from China's, CloudWalk will be better able to train racial biases out of its facial recognition systems—a problem that has beleaguered facial recognition companies around the world and which could give China a vital edge.
The CloudWalk deal is built on the back of a long-standing relationship between former Zimbabwean President Robert Mugabe's regime, seen by China as an ideological ally, and Beijing. His successor, President Emmerson Mnangagwa was sworn into office in November 2017 after a military coup forced Mugabe to resign after 37 years of increasingly repressive rule. Activists feared that Mnangagwa, Mugabe's former consigliere, would continue the patterns of his predecessor, especially if his regime is backed up with new security technology.
The deal between CloudWalk and the Zimbabwean government will not cover just CCTV cameras. According to a report in the Chinese state newspaper Science and Technology Daily, smart financial systems, airport, railway, and bus station security, and a national facial database will all be part of the project. The deal—along with dozens of other cooperation agreements between Harare and Chinese technology and biotech firms—was signed in April. Like every … Chinese) are willing to make an investment in a market as volatile as … first time—Chinese companies can afford to take risks. CloudWalk itself was the recipient of a $301 million grant from the Guangzhou municipal government.
"We are concerned about the deal, given how CloudWalk provides facial recognition technologies to the Chinese police," said Maya Wang, a senior China researcher for Human Rights Watch. "We have previously documented (the Chinese) Ministry of Public Security's use of AI-enabled technologies for mass surveillance that targets particular social groups, such as ethnic minorities and those who pose political threats to the government."
Some Zimbabweans are concerned about how their data will fare in China. Andy, who asked that only his first name be used, is studying for a Ph.D. at Beijing Normal University. For him, "the question is what the Chinese company will do with our identities. … It sounds like a spy game." He also says that he "know(s) for a fact" that "the Zimbabwe government will use this tech to try and control people's freedom."
In Zimbabwe, freedom of expression has long been curtailed or monitored by various means. In 2015, Mugabe accepted a gift of cybersurveillance software from the Iranian government, including IMSI catchers, which are used to eavesdrop on telephone conversations. In 2016, he cited China as an example of social media regulation that he hoped Zimbabwe could emulate.

Source: FP magazine
Japanese-funded water
facility saves lives in Rubirizi
Shs240m water pump aids 3000 households in 3 sub counties
By Ian Katusiime
Rubirizi is a little known district
located in western Uganda with
rolling hills and 32 crater lakes.
Part of the district has sections
of the famous Queen Elizabeth National
Park. Away from these attractive features,
however, is a dark past. Rubirizi has
grappled with a water crisis and lost lives
over the years as residents struggled to fetch
water from Lake Kako, one of the many
crater lakes in the area.
Residents of Ryeru Sub-County drowned
in the lake uncontrollably. “What hurt
us most is that these people were dying
of something preventable,” says John
Mubangizi, the vice chairman of Rubirizi
district. “On average, six people were dying
per year especially women and children.”
The steep slope to Lake Kako is 500 metres long, which made the entire exercise of fetching water more arduous. The acute water shortage also spread dysentery, cholera and had other undesirable effects such as children reporting late to school.

Journalists at the water pump in Mushumba village in Rubirizi on July 26. COURTESY PHOTO

After so much agony and helplessness, Mubangizi and residents brainstormed on how to find a lasting solution to the tragedy. They wrote to so many donors requesting that they fund a water project and luckily, the Japanese International Cooperation Agency (JICA) heard their plea and donated Shs241million. As a result, Mushumba water production project was established. Construction started in 2011 and ended in 2013. Later, the Ugandan government provided Shs145million for completion of the works. The government was initially skeptical on the viability of the project in spite of the pressing need for safe water use.
Water production in Mushumba village is a community effort where residents dug trenches and constructed access roads, and it is managed through water use committees, which are made up of three women and two men.
The production facility comprises a pump house, reservoir tanks, and a sand filtration system. "There is a user fee of Shs100," says Mubangizi, now the coordinator of the project. The fee is for maintenance such as the diesel-powered pump. Water has now been extended further and there are 30 distribution points in total that serve 3000 households. When The Independent visited Mushumba village on July 26 as part of a press tour organised by JICA for its projects in western Uganda, residents could not hide the excitement brought by the water facility. It was a far cry from when they could not even have a borehole because the rocks in the area are permeable.

Irrigation boosts Kasese farmers
Nyabubale Community Irrigation scheme in Kasandaara sub-county, Kasese is another venture where the government of Japan is involved in livelihood improvement in Uganda. Teaming up with Save The Children, an international NGO focused on children's rights, and the local government of Kasese, JICA has enabled 100 farmers to graduate from just growing maize to take on high value crops like water melons, onions and tomatoes.
Sam Kahiigwa is one of the farmers reaping from the irrigation scheme. When The Independent visited his farm on July 26, he said he would be selling off two full trucks of water melon in two weeks' time. His farm is on 14 acres and is responsible for the new lease of life Kahiigwa and others have got. He has built a two bedroomed house and says he is comfortable knowing he can support the education of his children. "The environment of the child is the focus," says Vian Musika, the Program Manager of Disaster Risk Reduction and Climate Change Adaptation at Save The Children in Kasese.
As a result, the organization empowers parents so they can take care of their children. "Previously the farmers were depending on rain-fed agriculture and now they can plant off-season thanks to irrigation," Musika adds. He says most climate change efforts are aimed at tree planting but the one by Save The Children is aimed at livelihood improvement.
After it conducted vulnerability assessments with the Kasese district local government, Save The Children decided to help "last mile farmers" in Kasese. The farmers are referred to as such because they hardly receive any focus yet they are affected by disasters like flash floods, landslides and prolonged dry spells.
The aim of the scheme is to strengthen the resilience of the communities through good agronomical practices. A poor parent means the child will lack basic needs, Musika explains.
Other interventions by JICA in western Uganda include a girls' dormitory built for Kahinju Secondary School in Kabarole district. The dormitory contains 200 beds with mattresses and will be opened for the third term, which starts in September. Lillian Asiimwe Olimi, the head teacher of Kahinju SS, says 25 girls dropped out of the school this year due to walking long distances among other challenges associated with staying away from school. The dormitory appears a major step at preventing such situations.
Government is now focusing on exporting its textiles to Europe after U.S. President Donald Trump issued a decree suspending Rwanda from exporting apparels to the US market duty-free under the African Growth and Opportunity Act (AGOA) scheme.
The ban was announced in a statement released last week by U.S.'s deputy trade representative, C.J. Mahoney. The move is a reaction to Rwanda's stand against importation of secondhand clothes and shoes into the country.
The Rwandan government imposed an import levy on secondhand garments of $4 per kilogramme this fiscal year, up from $2.5 last year, and a $5 per kilo on used shoes, up from $0.4 last year, to support its nascent textiles and leather industries.
The country earned $1.5 million in 2017 from apparel and shoe exports to the US, which accounts for only about 3% of Rwanda's total exports to the country. Rwanda remains eligible to export non-apparel products under AGOA despite this suspension.

C&H Garments workers at the firm's Kigali Special Economic Zone plant

Rwanda to enter new markets
But Rwandan government says it will not be bullied into reversing its position on secondhand clothes, opting to seek new markets in Europe, Asia and within the region. Speaking to The Independent, Minister of Trade and Industry Vincent Munyeshyaka said that the companies, which used to export apparel and footwear products under the AGOA arrangement, have already started entering new international markets.
"For example, C&H Garments has a secure market in the United Kingdom and Germany," he said, adding that the firm and other textile makers are being encouraged to explore the big opportunities in the local market.
Munyeshyaka assured sector players of government's support in case the ban affects production and threatens jobs. "In case industries reduce or stop production and people lose jobs, government will decide on some kind of intervention to support them," he said.
The minister said that the $1.5 million from apparel and footwear exports recorded last year under AGOA was by the eight companies that have been exporting apparel and footwear products to the US market, with C&H Garments contributing 90%.
Munyeshyaka explained that other Rwandan products would enter the US market under other agreements like Generalized System of Preferences (GSP) that is also tax exempt.
He explained that Rwanda could, however, export apparels to the US subject to taxation. "We were not banned from exporting apparel and footwear products to the US, we were only denied duty-free exportation," he clarified.
EAC countries had all proposed to gradually ban importation of secondhand clothes and shoes, but later other countries – Kenya, Uganda and Tanzania – balked under heavy pressure from US.
A 2017 petition by the Secondary Materials and Recycled Textiles Association (SMART) to the US government, asserting that the EAC's decision undermined the AGOA criteria and their investments, sparked off the suspension.
While commenting on the suspension last week, Rwanda Development Board (RDB) chief executive officer Clare Akamanzi said the decision on who benefits from AGOA is up to the United States government.
Akamanzi said some companies had already sent product samples to some European countries as the suspension loomed over the past many months.
"We have a plan to maintain people who have been employed in those companies; I think the impact is not something we cannot manage," she said.
"We expect some Rwandan companies to be affected and we have a plan for them. We have engaged them and we will be helping with the transition to new markets."

Textile sector players speak out
As local textile players wait to see this assistance, Rwanda Clothing's production manager, Antoinette Twagirayesu, says that the firm is focusing on growing locally, adding that it already serves a growing clientele from US and Europe who order for the clothes directly. Twagirayesu said; "We are going to focus on promoting Made-in-Rwanda clothing, which still has huge growth potential."

Expert commends Rwanda's decision
Amir Ben Yahmed, the chief executive officer of the Africa CEO Forum, hailed Rwanda's decision, saying that the suspension "should not be regarded as a business issue but as politically motivated".
"Africans should be proud of the decision taken by Rwanda," Amir said. "No country in the world should accept secondhand clothes at the expense of its own industries."
Africa CEO Forum organizes the annual meeting of African CEOs and business leaders to find ways to address the continent's challenges and enable it to achieve target development and economic prosperity.
President Kagame, the Prime Minister and the speakers of senate and Parliament pose for a group photo with the recently appointed judges

Impartiality, independence and ensuring that citizens get justice in the shortest time possible should be cardinal principles of the judicial system, President Paul Kagame has said.
President Kagame made the comments while presiding over the swearing in ceremony of 17 judges of the newly established Court of Appeal.
"This is what Rwandans want our justice system to be defined by because we know that justice delayed is justice denied," President Kagame said. The establishment of this court means that some of the cases from the High Court, which were hitherto handled by the Supreme Court, will now instead go to the Court of Appeal. Among the cases to proceed to the Supreme Court are those related to the applications of review on grounds of injustices, those involving high ranking officials, jurisdiction on the constitutionality of the organic laws and decisions on the precedents and presidential election petitions.
The new judges sworn in by the President include Cyanzayire Aloysie, the president of the Supreme Court, Aimé Karimunda Muyoboke, the president of the Court of Appeal, Xavier Ndahayo, president of the High Court, who is deputized by Bernadette Kanzayire, and Angeline Rutazana, the vice-president of the Commercial High Court.
The establishment of the new appellant court has created optimism and enthusiasm among the public, civil society and experts … justice, as grieved parties now have more options to seek justice.

Experts, Rwandans welcome the court
Prof. Dennis Bikesha, the dean of School of Law at University of Rwanda, said these reforms in the judicial system are crucial to "standardise the country's judiciary".
He said in other countries, the Supreme Court not only handles cases but also gives legal advice to the government, explaining that the top court will be counseling the state on the matters concerning the judiciary. "As a result, the Court of Appeal will now handle most of the cases that used to go to the last appellant court," Prof. Bikesha told The Independent.
Tom Mulisa, the executive director of the Great Lakes Initiative for Development, a non-profit organization that advocates for the rights of the unprivileged, said the Court of Appeal will help justice to be served faster compared to the previous period.
Silver Mugabo, a resident in Kigali, said the judicial restructuring is for the "better functioning and delivery of justice to the Rwandan people". Mugabo also added that the court will ease the load that remained pending in the Supreme Court for a long time.
Harrison Mutabazi, the spokesperson of the judiciary, explained that the judicial reform is aimed at impacting efficiency of the judicial services in Rwanda.
"Before, cases would begin from intermediate courts to the Supreme Court, but … which will reduce the case backlog and the delays in the delivery of justice," Mutabazi explained.
Mutabazi noted that before the establishment of the court, it would take up to three years for the litigants to get the final court decision from the Supreme Court. Cases, he explained, will now be heard quickly in reasonable time instead of taking years.
Mutabazi also added that due to the single judge system that will be applied in the Court of Appeal, cases are going to be handled faster compared to the duo system in the Supreme Court, which has also been blamed for delays.
As recently as 2012, it would take at least 66 months for a case to start being heard in the Supreme Court. This has since been reduced to 20 months. In the High Court, this has dropped from 11 months to three months in the same period, as a result of judicial reforms.
According to this year's judicial report, 1,038 cases were pending at the beginning of April 2018 but these had dropped to only 723 by the end of April.
Berth Murora, the secretary general of the Supreme Court, told The Independent that although gaps remain, the court has put in place mechanisms that have helped reduce the backlog. The Court of Appeal, she said, will also help. Murora also explained that while the restructuring of courts started in the primary courts this year, the establishment of the court of appeal was the major target.
Uwizeyimana, a resident of Remera
in Gasabo district is a regular user
of commuter buses that ply differ-
ent roads in Kigali city. But many
times, he says, he has had to wait in long
queues for up to 30 minutes at both Remera
and city bus terminals for buses. Because
of this, he often delays to attend to critical
business commitments.
Long queues for buses are a common
sight mostly during the rush hours in the
mornings and evenings. Yet when the
City of Kigali and Rwanda Utility Regula-
tory Agency (RURA) awarded three firms
contracts to companies to operate public
transport in Kigali in August, 2013, that was
expected to put an end to the long queues.
Kigali Bus Services (KBS), Rwanda Federation of Transport Cooperatives (RFTC) and Royal Express, which won the contracts, were supposed to operate from 5am to 11pm, migrate from 18-seater to 30-seater omnibuses, and then to big buses of 50-seaters for mass transport. This was envisaged to reduce waiting time at bus stops to around five minutes.
The move was also expected to eliminate the tendency of abandoning passengers before reaching their destination and ease the traffic jam as it would do away with smaller omnibuses, which were about 800. One city bus has capacity to replace three omnibuses.
Generally, according to Rwagatore Etienne, a public transport inspection officer, the major target was to "bring efficiency in the sector to improve service delivery". But users like Uwizeyimana are yet to see this efficiency.

Public transport operators speak out
Part of the reason is that migration from smaller to bigger buses delayed. Some bus companies acquired bigger buses only a few months ago, when their contracts are about to expire. In 2013, only KBS owned big buses.
"There are two problems," says Nilla Murenzi, the manager of Royal Express. "One, buses are not enough currently. The other issue is that the City of Kigali is upgrading most of the main city roads, which causes diversion of traffic and, in turn, leads to delays with passengers waiting for long periods at bus stops. But this is temporary."
Murenzi explains that the delay to transition to the bigger buses was partly occasioned by the nature of the contracts.
"The contracts are very short," Murenzi says. "Five years is a short time. Remember we acquire buses through bank loans and we do not get tax breaks from government like other investments do. Today, one city bus costs around 100 million. This is not a small capital outlay."
The other challenge, he said, is depreciation of the franc against the dollar. "In 2013, one dollar was equivalent to Rwf520, today it's at 881, almost 63% increase," Murenzi says.
Despite these challenges, there has been significant improvement. When the companies got the contracts in 2013, omnibuses of 18-seater capacity were servicing 99% of the public transport system in Kigali. Today, 70% of Murenzi's company buses are high capacity.
Sources say RFTC, the biggest operator, has only 20 big buses, brought in sometime late last year and around January 2018. RFTC plies routes like Zindiro-Kimironko-Nyabugogo; Kimironko-Kigali; Nyabugogo-Kinamba-Gisozi; Nyabugogo-Batsinda, and Remera-Kigali, among others, and intra city roads.
Deo Muvunyi, the manager of Kigali Bus Services, says some of the challenges are infrastructural in nature. For instance, he explains that the promised special lanes for buses are not yet established, adding that this will take time "because expansion of roads is costly and takes time". He also remains positive, noting that apart from these challenges, there have been improvements.
Government scrapped taxes on importation of big buses in the current budget and now importation of buses with capacity to transport 50 passengers and above is zero-rated compared to 25% previously. However, those importing 25-seater buses will pay 10% tax, down from 25%.

What sector regulator says about delays
According to Aaron Ndagijimana, the deputy of transport at sector regulator, Rwanda Utility Regulatory Agency (RURA), the waiting time did not reduce to the targeted level "because improved city transport has encouraged many people to opt for public means of transport, thus demand has surpassed supply".
Ndagijimana explains that the number of passengers doubled compared to when the system was rolled out five years ago. "Today, we are talking of 420,000 passengers against 200,000 in 2013. We want to balance demand and supply," he says.
However, this increase in the number of commuters could also be attributed to the growth in Rwanda's and the city's population. Government figures indicate that population size of Rwanda was 10.5 million in 2012, just a year before the reforms, and is presently at over 12.08 million.
Ndagijimana adds that there are ongoing discussions to find ways of addressing increasing traffic jam in Kigali, including expansion of city roads that is ongoing and discouraging use of private cars.
The annual Rwanda International Trade Fair (RITF) organised by the Private Sector Federation (PSF) and the Ministry of Trade and Industry is this year bigger and better.
The event organized at Gikondo Showgrounds in Kicukiro district will this year run for three weeks compared to two weeks previously, from July 26 to August 15.
The event has also grown from 80 exhibitors in 1997 to over 400 companies this year, from all corners of the globe; including Asia, Africa, and Europe. They are from different sectors; including construction and housing, ICT, textile, tourism, mining, banking, agriculture and agro-processing, and general trade among others.
Countries like The Gambia, Benin, Burkina Faso, Ivory Coast, Japan, Nepal, Togo and the Republic of Congo are participating for the first time.

Cashless entrance fees
Meanwhile, the trade fair has gone digital with visitors paying entrance fees … from the paper money payments during the previous expos.
Sharon Munyana, the marketing manager at AC Group, the tech start-up behind the Tap & Go payment innovation, says the move supports government push to use digital facilities in all sectors of the economy.
Munyana says that of the 1.8 million people with Tap & Go cards, 80% are potential expo-goers.
"This was the biggest motivation for us to venture into this cashless mode of paying expo entrance charges. So, we approached PSF and they agreed to
Tea pickers at work. The beverage raked in Rwf77.4 billion last fiscal year

Rwanda's tea exports rose by 15 percent during the financial year 2017/18 on the back of increased Rwanda Tea brand promotional activities, experts say.
Issa Nkurunziza, the head of the tea division at the National Agriculture Export Board (NAEB), says that Rwanda exported 27,824 tonnes of tea during the period under review generating $88 million (about Rwf77.4 billion), which is higher than what was recorded in the previous year. The country shipped out 25,128 tonnes of tea during fiscal year 2016/17.
Nkurunziza said that the decision to market the tea under the "Rwanda Tea, a Natural Reawakening" brand that was launched last year was now paying dividends. He noted that the campaign has boosted the interest and attracted … on the regional and global markets.
NAEB projects the country to fetch $92 million from a targeted 21,000 metric tonnes of tea exports this fiscal year. Different teas are marketed under the Rwanda Tea brand, including black tea, green tea, organic tea and spicy teas.
The NAEB official said that 75% of Rwanda's tea was sold through the Mombasa auction, while between 23 and 24% was bought through the export body's memoranda of understanding with buyers, and only 1.7% was sold on the local market.
The beverage is sold to 48 countries in Asia, the Middle East, Africa, and Europe, according to NAEB. The tea sector supports 42,840 farmers in Rwanda's 12 districts, mainly from Northern, Western and Southern provinces, and the whole tea value chain … casual and permanent staff.

Support Tour du Rwanda
Rwanda Tea is one of the sponsors of the annual Tour du Rwanda. Amb. George William Kayonga, the chief executive officer of NAEB, says that the event provides NAEB an opportunity to further market Rwanda's tea to the world.
"We are happy to work with Rwanda Cycling Federation in making this competition. Apart from the competition, the participants will be able to enjoy Rwanda's breathtaking greenery, including tea plantations and other tourism attractions," Kayonga said on August 4. The brand will reward the best combative rider at each stage. The international cycling competition started on August 5 and ends this weekend.
Telecom market race
[Graphic: MTN, Airtel; Challengers: Africell, UTL; Struggling: Smile, Smart; Quitting: Vodafone]
By Julius Businge

A few days to the implementation of the social media tax, commonly referred to as the Over the Top (OTT) tax, on July 1, 2018, the three largest telecom firms – MTN, Airtel and Africell – issued a joint statement to guide their customers on how to pay the levy.

The statement, which carried a rather plain message, signaled the changing dynamics in the telecom sector that had seven telecom firms prior to the recent exit of Vodafone Uganda Limited.

Vodafone, which started operations in 2015, was the latest entrant in the country's telecom industry.

Other telecoms such as Smart, owned by the Aga Khan Development Network (AKDN), and Smile Telecom were forced to drop voice service to concentrate on data.

K2, on the other hand, which launched operations in 2012, has been closed twice over tax arrears. On July 27, it announced that it had signed a brand endorsement agreement with Airtel where its customers would access all telecommunication services on the latter's infrastructure.

The government-owned Uganda Telecom, one of the oldest telecom firms in the country, is currently under statutory management and continues to grapple with old infrastructure and the hassle of finding investors to jump-start its operations.

This scenario has reignited debate on the future of the sector that seems to be slowly going into the hands of a few players. This, to some analysts, could easily see these players form cartels – which would distort the market. In addition, the disappearance of small players means less competition in the sector, loss of jobs and related economic opportunities.

The Independent sought views from sector experts, telecom firms and the regulator in regard to the current fears and likely future performance of the sector.

"In the beginning the market was all about voice, it is now data and next it might be services," Badru Ntege, the chairman of the East African Science and Technology Commission (EASTECO) and group CEO of consultancy firm NFT, told The Independent on August 07.

He said the legal regime that allows incumbents (Airtel and MTN) to rent out infrastructure only enriches big players and makes small ones uncompetitive.

But Godfrey Mutabazi, the executive director at Uganda Communications Commission (UCC), the regulator, told The Independent on August 06 that the failure of small players in the market is not because of the current legal regime. First, he said, Uganda is a liberalised economy and that the sector has terms and
Uganda's Securities Exchange market is set to experience some new buzz again as the long awaited drug maker, Cipla Quality Chemical Industries Ltd (CiplaQCIL), makes an Initial Public Offering this month.

The company said in a statement dated August 02 that its shareholders will be reducing their stake as part of its growth strategy.

"Each of the shareholders will be selling a minority of their stakes to enable sufficient free float and liquidity; Cipla Group, represented through a subsidiary, will retain a majority stake," the notice reads in part.

The Indian-based Cipla Group owns a 62.3% stake in the company, followed by Capital Works Investment Partners and TLG Capital, which own 14.4% and 12.5% respectively. Local investors – Emmanuel Katongole, Frederick Mutebi Kitaka and George Baguma – hold a stake of 3.6% each.

The company said the listing has received the relevant approvals required and that it plans to provide further details soon. Though the company could not divulge further details of the planned IPO, available information indicates that it plans to sell shares intended to raise at least Shs100bn.

Renowned market firm Renaissance Capital is acting as the lead transaction advisor and book runner, while Crested Capital (Uganda) is the lead sponsoring stockbroker to the listing.

This announcement comes amidst several calls amongst market experts to have a carefully designed campaign aimed at sensitising members of the public and investors on the benefits of listing and trading in company shares.

It also comes at a time when the capital markets regulator, the Capital Markets Authority (CMA), through its current master plan, is recommending to government to engage companies in the key sectors of oil and gas and services to list, so that Ugandans can have a share in them, in addition to having them attract high profile investors from overseas to benefit the entire economy.

Currently the USE has 16 local and regional companies listed and few counters record market activity during days of trading. Power distributor Umeme Limited was the last to list on the market, in 2012.

Company strength

The company makes anti-malarials, anti-retrovirals and Hepatitis B drugs for the Sub-Saharan African region including Uganda, Kenya, Rwanda, Tanzania, Namibia, Ivory Coast, Zambia, Zimbabwe, Malawi, Mozambique, Ghana, Ethiopia, Angola and South Sudan.

Although the company has the capacity to produce 70 million tablets a month, it is currently producing at 65-70% capacity due to instability in demand for drugs.

The Capital Markets Authority remained tight lipped on the planned offer. USE Chief Executive Officer, Paul Bwiso, said: "It is an exciting time for the Uganda capital markets that last had an IPO in 2012 and we applaud CiplaQCIL for taking this important step in its growth story."

Joseph Kibuuka, the head of investment banking at Crested Capital, told The Independent on Aug. 04 that he was not authorised to talk about the transaction but said any new listing of good quality on the USE is a good addition and would boost market activity. He said there is need to run aggressive sensitisation campaigns with messages on the benefits of investors taking part in the capital markets.
By Isaac Khisa

…industrial parks

Limited access to water, electricity & roads remains a big headache in these parks

…ness parks over the last five years due to non-compliance to the set terms.

The Authority's Acting Executive Director, Basil Ajer, told The Independent in an interview that the firms lost land for failure to develop it within the stipulated period.

"Once a firm has been allocated land in these industrial parks, there are activities that need to be carried out to show that it is committed to its goal," he said. "The firm, for instance, should ensure that the allocated land has been surveyed, plans approved, perimeter walls constructed, among others, within 18 months. However, this was not fulfilled."

This comes exactly a year since the UIA gave firms that had been allocated land in the seven industrial parks up to 18 months to prove that they are able to develop the land or lose it.

Last year, the investment promotion agency revealed that it withdrew 52.4 acres from 10 companies that failed to utilise their land in the Kampala Business and Industrial Park, Namanve, while 7.5 acres were repossessed from four companies in Soroti Industrial & Business Park.

The government has in the past years courted prospective local and foreign investors, especially those in manufacturing, to set up investments in the industrial parks. However, a section of investors have been reluctant to set up investments there, citing limited extension of utilities such as water, roads and electricity.

Meanwhile, UIA released a report on the level of investment licenced last year, with statistics indicating a more than 50% drop in licenced projects, from 512 during the FY 2016/17 to 247 during the FY 2017/18, citing a drop in Foreign Direct Investments (FDIs).

"The way we attract FDI is not unique from other countries, so if the global trends are declining, you expect that we shall attract less," said Joseph Kiggundu, a consultant director at UIA.

Available data shows that FDI inflow declined from US$1.05 billion during 2016/17 to US$701.76 million last financial year.

In terms of sector contribution, the manufacturing sector registered the biggest number of licenced projects (125), accounting for over 50% of all the licenced projects in 2017/18. This is attributed to the government policy towards value addition and incentives such as free land in Namanve and the 10-year tax holiday for foreign investors.

This was followed closely by the mining and quarrying sector, which accounted for 11%, finance, insurance, real estate and business services at 9.3%, and electricity, gas and water at 9.1%, among others.

The sectors that registered the lowest number of licenced firms included wholesale & retail and catering & accommodation services, which recorded only 1%, followed by transport, storage and communication as well as community and social services, which each registered 3%.

The central region registered the highest number of licenced projects (201), representing 81% of all the licenced projects in 2017/18. This has been driven by economic infrastructure, financial services, markets and skilled manpower, which are abundant in this region. The eastern and western regions accounted for 8.1% each of the licenced firms, with the rest taken up by the northern region.

Current status

A total of 33 industries are currently in operation within Namanve, directly employing 15,000 Ugandans within the park; 87 have commenced construction whereas 121 companies are still in the pre-stage – surveying, processing of title deed plans and titles, environmental impact assessment, among others.
Kampala's surging residential properties

Supply of residential apartments in prime suburbs could exceed demand over the next two years

By Isaac Khisa

Demand for residential properties in the secondary residential suburbs of greater Kampala has risen on the back of increased stock of newly constructed properties, according to the latest report from the property agency Knight Frank.

The 2018 half year report shows that demand for residential properties in the middle income segment grew by 9%, particularly in Kira, Najjera, Kyanja, Namugongo and Naalya, with selling and renting prices ranging between Shs100m – Shs200 million and Shs400,000 – Shs800,000 per month, respectively.

National Housing Estate Naalya

"It is also interesting to note that approximately 80% of the new stocks of apartment blocks available on the market in these areas were sold during the first half of 2018," the report says.

There has also been a 5% year on year increase in occupancy rates for the prime residential suburbs of Nakasero, Kololo, Naguru and Bugolobi, owing to the completion of pipeline projects in the second half of last year.

The government, which has brought forward its final investment decision for a planned 1,445-kilometer (900-mile) crude-export pipeline to the end of 2018 from June 2019 via Tanzania, plans to start oil production by 2022, two years later than earlier planned.

The report, however, projects that the supply of residential apartments in the prime suburbs of Nakasero, Kololo, Naguru and Bugolobi could exceed demand over the next two years, if the existing and newly completed developments are not absorbed by the market over the next six months.

This was the same trend with office space. Demand for office space registered an increase, with Grade A office space occupancy over the last 12 months currently standing at 92% compared with Grade B space at 78%. This is a 2% year on year growth in Grade A occupancy compared to the 90% recorded in the first half of 2017. Currently, there's limited vacant Grade A space in the core Central Business District and secondary office locations available on the market.

Demand for Grade B office space declines

On the contrary, Grade B office space saw occupancy rates register a 7% decline during the first half of 2018 from the 85% recorded during the same period last year, as organisations moved either to acquire their own properties or relocated to properties with better facilities and amenities. This has in turn increased the available stock of Grade B space by approximately 2%.

This increase in space of older Grade B stock, coupled with the relatively low demand for the same, has put further downward pressure on rental rates, with Grade B gross rents (excl. taxes) now ranging between $10 and $12 per square metre per month depending on the specific attributes of the properties.

On the same note, the industrial property sector saw a 10% decline in demand for space over the first half of 2018, in line with the slowdown in economic activity, with some corporate organizations and private businesses downsizing and opting to take up smaller and or cheaper warehouses.

There was also a lot of speculative development of logistics and storage facilities on the back of anticipated demand from the oil and gas sector, but this has not quite materialized.

"It is also likely that the oil and gas sector will have specific needs and requirements which will have to be built to suit and not speculative," the report states.

The traditional industrial areas (1st – 8th Street), Ntinda, Nakawa and Luzira continue to remain the preferred locations for businesses requiring only storage space, as opposed to the manufacturers who have built owner occupied premises in the Namanve Industrial Park and or along Jinja Road.

Likewise, a lot of the older stock in the traditional industrial areas is being extensively renovated or plots redeveloped with more modern storage and showroom space. This trend, the report says, will continue as owner occupiers relocate to designated industrial and business parks.

This new development comes at the time the Bank of Uganda projects the economic growth prospects to remain favourable in the medium term, buoyed by multiplier effects of public infrastructure investments, higher agricultural productivity, increase in household consumption and the overall strengthening of the global economy.

Outlook

Looking forward, the supply gap in Grade A office space is expected to narrow in the medium term given the number of good quality developments in the pipeline.

"Approximately 65,000 m2 of Grade A space is expected on the market during the next 12 months with at least 50% of it being built for owner occupation by organizations such as law firms and government parastatals," the report says.
…festival set

On August 19, Ugandans will enjoy their favorite rolex combos at the Coca-Cola sponsored 2018 Kampala Rolex Festival.

The festival, which returns for a third edition, will be held at The Uganda Museum. The festival is organised by the Rolex initiative together with the Ministry of Tourism, Wildlife and Antiquities.

Coca-Cola officials said the festival is just another example of the company's continued association with food and celebrated rituals among families and friends. In addition, the festival presents a unique opportunity for Ugandan rolex vendors, consumers, corporate companies and tourism institutions to celebrate Uganda's most popular dining offering and the country's rich and original food culture.

Coca-Cola Uganda's Brand Manager, Miriam Limo, said: "This festival gives us an opportunity to offer our consumers a unique moment and experience of pairing their favorite rolex with an ice cold Coca-Cola. We remain committed to offering our consumers unique moments and experiences."

Pan-African lender United Bank for Africa plans to roll out its mobile banking platform mid this month, and later agent banking, to facilitate customer convenience and also tap into the unbanked population.

UBA Uganda CEO Johnson Agoreyo said the lender is in the final stage of preparations and testing of the system. "We are currently testing the platform and should be up and running in the mid of this month," he told media at the Kampala Serena Hotel on August 02. "We have also made progress with the agent banking and you should be able to see something at the end of this month."

Launching the two services will put the Nigerian-owned lender at par with a section of other banks such as Stanbic, Centenary and Equity that have already done so. The Nigerian-owned lender, which entered Uganda's banking industry in 2008, made its first profit in 2016. The bank recorded a modest net profit of Shs2.6bn in 2016 from a net loss of Shs4.3bn in 2015.

Banking

Equity Bank enters Church House

Equity Bank officials welcome Deputy Governor for BoU, Louis Kasekende

Equity Bank has relocated its Uganda head office from Katwe to Church House, in the middle of the city on Kampala Road, as it marked 10 years of operation.

The bank funded the construction of the US$17 million building and will recoup its investment through rent reconciliations with Church of Uganda, the owner of the building. Church House is a 16-floor commercial office building, with Equity Bank occupying three floors.

"This magnificent building is testimony of what partnerships can give you," said James Mwangi, the Equity Group managing director and CEO. The Deputy Governor, Bank of Uganda, Louis Kasekende, applauded Equity and Church of Uganda for putting up the building.

He said that Equity Bank's business model of reaching out to large and small consumers through digital tools and agency banking was critical in promoting financial inclusion and economic development.
Banking

NC Bank launches its first Visa Debit Card

Officials addressing the media during the Visa launch

NC Bank Uganda Limited has launched its first Visa enabled debit card. Launching the product on Aug. 02, the Bank's Managing Director, Sam Ntulume, said the card would give its customers convenient payment solutions across the world.

"This launch marks an important step for NC Bank as we continue to offer our customers a wide range of services," Ntulume said.

He said that the Bank is constantly looking for solutions to ease the financial services offering and would go ahead to roll out more new products. He said a cashless economy and technological advancement is encouraging more people to seek easier ways of making payments without necessarily visiting banking halls.

Cardholders will be able to access money internationally with comprehensive security and protection with its chip and pin technology. Customers can also shop online using the card.

Economy

PMI signals successive improvement in business conditions

The private sector has continued to register positive growth with the eighteenth successive improvement in business conditions, according to the latest Purchasing Managers' Index (PMI) for July.

At 53.2 recorded in July, the private sector maintained the same positive performance score that was seen in June, remaining above the average since the survey began in June 2016.

The latest statistics released for July show that there was a further improvement in business conditions across the Ugandan private sector, reflecting ongoing expansions in output and new orders amid reports of strengthening client demand.

Commenting on July's survey findings, Jibran Qureishi, Regional Economist E.A at Stanbic Bank, said: "Private sector activity remains solid and could further benefit from the ongoing public investment in infrastructure, in addition to the government's intentions to clear domestic arrears over the course of the fiscal year."
Over-regulation could suppress bancassurance

By Independent staff

Badru Kiggundu, the presidential adviser on infrastructure, touring Roofing Rolling Mills Namanve during the engineers' visit of the plant on August 4. Kiggundu urged the engineers in Uganda to be innovative in order to address challenges of steel production in Uganda. INDEPENDENT/JIMMY SIYA

Over-regulation of bancassurance services in Uganda could limit the growth of the segment, according to Julia Shisia, executive director at the Nairobi-based Stanbic Insurance Agency.

Shisia, who spoke during the fourth Bancassurance Annual Forum in Kampala on August 01, said designing relaxed regulation like it is in Kenya could spur growth of bancassurance.

"For instance, in Kenya, a bank is allowed to be an agent for all insurance firms and it is not necessarily that one must have a certificate of proficiency to sell insurance," she said. "I believe Uganda's insurance industry can also use this model to grow the bancassurance business segment."

She said insurance firms can carry out advertisement for a specific bancassurance product jointly with a commercial bank.

This comes as commercial banks in Uganda are allowed to partner with a maximum of four insurance firms to sell insurance. Also, an individual willing to sell insurance should possess a certificate of proficiency.

Bernard Obel, the acting director, supervision, at the Insurance Regulatory Authority of Uganda, said the regulator will look into the guidelines and possibly make readjustments.

IRA-U started issuing bancassurance licences to commercial banks in December last year, a step intended to boost the country's insurance penetration, now the lowest in East Africa. So far, 12 banks have been given a go-ahead to be insurance agents.

Uganda's insurance penetration currently stands at 0.73% compared with Rwanda's 1% and Tanzania's 2.3%. Kenya has the highest insurance penetration at 3.4%.

Last year, the country's insurance industry recorded a 16% growth in insurance premiums to Shs737billion, up from Shs634billion in 2016. In 2015, premiums were recorded at Shs611billion.

Cissy Kagaba, the Executive Director of the Anti-Corruption Coalition Uganda (ACCU), addressing the press after the release of a validation report on environmental audit on forestry activities in Uganda for 2017 at Hotel Africana on August 2. INDEPENDENT/JIMMY SIYA
…it. The patient will need other treatment interventions like wound care, pain relief and, in extreme cases, reconstructive surgery, because snake venom tends to kill tissue cells whose only treatment is amputating the limb.

There is another catch. The South African brand, which is the best on the market, now requires a cold chain, and yet the Indian one that doesn't require that is not effective.

Royjan, who is also a self-taught snake handler and snake farm owner, says that while some countries have monovalent antivenoms, which treat bites by a specific snake species, the best option for countries like Uganda, which have many cases, many snake species and yet little resources, are the polyvalent antivenoms, which treat a number of snake bites with one antivenom cocktail.

"Research is also ongoing to develop rapid diagnostic tests to quickly identify which kind of snake attacked someone," Royjan told The Independent. "When this is done, we will be able to provide treatments quickly."

But before this happens, Royjan says, people should try as much as possible to avoid bites first by changing their attitude of attacking and killing snakes when they spot them. He explains that most of the snakes, even the most poisonous, are peaceful and will not strike unless disturbed.

He recommends that people move away once they spot a snake. If it spits venom in one's eyes, he adds, they should be rinsed immediately with water. He says once bitten, all tight items on one's body should be removed and the wounded area left alone. Then, he adds, the patient should also be made to lie on the ground on the side that has not been bitten, to limit movement of the affected area.

He warns against lying on the back or use of traditional unsafe treatments. Protecting against bites, according to him, is as simple as closing all holes in one's house, cutting grass around the house and watching your steps when in the bush.

For Sophie, however, with comprehensive sensitization, most of the injuries caused by snakes will be avoided.

For the first time, Pallisa district Local Government gave a budget-line to Family Planning (FP) services in its annual budget for Financial Year (FY) 2018/2019 to reduce the high teenage pregnancies and FP unmet need. This is commendable and the rest of the districts should borrow a leaf if we, as a country, are to make strides in improving FP services, achieving the FP2020 global targets and Uganda's Development Vision 2040.

Pallisa has a Total Fertility Rate (TFR) estimated at six to seven children per woman (6.1), teenage pregnancy at 25%, contraceptive prevalence at 31%, FP use among married women at 20%, and the unmet need at 69%. This is mainly attributed to persistent stock-outs of FP commodities as well as lack of information on sexual reproductive health among the young population in Pallisa district (Lot Quality Assurance Sample - LQS results - 2015).

At national level, contraceptive prevalence (modern and traditional methods) stands at 39% among married women, and at 51% among sexually active unmarried women; the unmet need for FP stands at 28%; while the TFR is as high as 5.4 (Uganda Bureau of Statistics (UBOS), 2017 and Uganda Demographic and Health Survey 2016).

Nearly one in five married women and two in four teenagers do not want to have a child but are not using contraceptives. It is against this background that the district allocated 2% of the Primary Health Care (PHC) funding to FP programming. This means that all health facilities will have FP activities in FY2018/19. The district further allocated UGX10 million to training of health workers on FP service provision.

This was achieved through the development of an Extended Family Planning Working Group that advocated for FP programming at district level. Coalition for Health Promotion and Social Development (HEPS-Uganda), a local NGO, supported the district using a modeling tool known as ImpactNow to predict the short-term future health and economic benefits of FP investments on behalf of the district. Through the evidence gathered, the district leaders were convinced that the status quo of the district is bound to change if investment is made in FP services.

Ensuring universal access to Family Planning (FP) has been identified as a key priority for realizing the Sustainable Development Goals (SDGs), achieving the FP2020 global targets and Uganda's Development Vision 2040. High quality FP can help curb rapid population growth, improve health, and drive development.

Uganda recognizes universal access to Family Planning (FP) as a key priority for realizing the Sustainable Development Goals (SDGs), achieving the FP2020 global targets and Uganda's Development Vision 2040, and as a country, Uganda has made significant progress towards fulfilling the right to family planning; funding, however, is not enough. There is need for a concerted approach such as that of Pallisa to improve the quality of family planning services.

If all districts are committed to increasing funding to family planning just like Pallisa has done, the country will move faster in achieving the FP2020 goals and Vision 2040 set targets.
In an article published under the title, "Court has sanctioned the return of violent Constitutionalism in Uganda", Professor Onyango-Oloka asserts that the Constitutional Court that sat in Mbale has "…plunged our constitutional regime into a quagmire not seen since the infamous decision in the case of ex parte Matovu".

"Decided in 1966, Matovu's decision affirmed that the judiciary would look aside when the Executive arm of government (in that case the government of President Apollo Milton Obote) used force to change the Constitution".

Far from creating a constitutional quagmire, the events of 1966 actually removed a constitutional quagmire, which had obtained in the federal constitution of 1962. The 1962 Constitution contained the following clause: "7(1) Subject to provisions of the Constitution of Uganda (including the provisions of the Constitutions of Buganda included therein), the Legislature of Uganda and the Legislature of Buganda shall have concurrent power to make laws for the peace."

This provision became a serious quagmire when it came to dealing with Edward Mutesa's transgression. Mutesa once went to 'the lost counties' and shot people dead. Professor Mamdani tells us on page 244 of his book, "Politics and Class Formation", that: "One Sunday, Edward Mutesa went on an expedition to the lost counties with 8,000 ex-servicemen, demonstrated his royal prerogative of being above the law by one morning shooting nine Banyoro peasants gathered in a market place..." This was murder, which would have required his trial; however, one could not take him to court as long as he was head of state.

Secondly, none other than Professor Mutibwa, himself a Muganda, a member of the 1995 Constitutional Commission and a retired professor of history, has written in his book, "Uganda since Independence: a story of unfulfilled hopes", thus: "The political dispute between Obote and Ibingira and his supporters centered around … and assistance of the Army Commander, Brigadier Opolot, wanted to remove him from power and that plans to this end were in an advanced stage by the end of 1965. No one, let alone Ibingira and his supporters, has denied that they wanted to see Milton Obote and those who believed in socialist philosophies removed. Their only regret is that they failed."

Mutesa's involvement in plots to illegally overthrow the government was of course treason, for which he should have been prosecuted in a court of law. However, Mutesa was a sitting president of the country with immunity against prosecution. To prosecute him would have required that he be impeached. As provided for in the 1962 Constitution, such a move by Parliament would have required concurrence with a vote of two thirds of the Lukiiko (parliament) of Buganda. There is no way such a move would have been approved by the Lukiiko.

It was to resolve this constitutional contradiction that Binaisa, the Attorney General at the time, advised Obote, the Prime Minister, to abrogate the 1962 Constitution and replace it with the 1966 Constitution.

The validity of this constitution was to become an issue when, on September 6, 1966, Michael Matovu filed, through his advocate, a writ of habeas corpus under Section 349 of the Criminal Procedure Code of Uganda.

Following the revolution, there was some resistance, and some of those who were leading the resistance got arrested. Among those arrested was Michael Matovu, the Saza Chief of Buddu in Buganda. Matovu was detained on May 26, 1966.

Matovu's application for a writ of habeas corpus led to various questions requiring constitutional interpretation, and so the presiding judge, Jeffreys Jones, J, referred the matter to a 3-member bench of the High Court (Udo Udoma, Chief Justice; Sheridan and Jeffreys Jones, JJ) for hearing and determination of the Constitutional questions. The most important issue, however, and the core of this precedent, turned out to be the question of the 1966 Constitution's validity.

No doubt, the roots of this 1966 Constitution lay in an extra-constitutional act, to wit, a revolution carried out by Apollo Milton Obote when he seized all powers of government on 22nd February 1966.

The Attorney General submitted that under International Law, an independent and sovereign nation may have its government or Constitution changed by way of a revolution, where an abrupt political change destroys a pre-existing legal order and effectively replaces it in a manner that the pre-existing legal order did not itself contemplate.

It was thus argued that the suspension of the 1962 Constitution and the seizure of all powers of government by Apollo Milton Obote in February 1966 constituted a revolution. It was put to the Court that a revolution had occurred in Uganda, destroying the legal order underlying the 1962 Constitution and establishing the new legal order under which the 1966 Constitution was validly established.

Further reliance was sought from the Pakistani Supreme Court decision of The State v. Dosso, where the Kelsen theory was applied in a similar circumstance. In the case of Pakistan, the declaration of martial law by President Iskandar Mirza on October 7th, 1958 abrogated the 1956 Constitution.

Kelsen theorized: "No jurist would maintain that even after a successful revolution the old constitution and the laws based thereupon remain in force, on the ground that they have not been nullified in a manner anticipated by the old order itself. Every jurist will presume that the old order – to which no political reality any longer corresponds – has ceased to be valid, and that all norms, which are valid within the new order, receive their validity exclusively from the new constitution. It follows that, from this juristic point of view, the norms of the old order can no longer be recognized as valid norms."
the control of UPC and ultimately the very (not the application for the writ of habeas The three member bench (in ex parte
leadership of the country in terms of the corpus per se). Matovu) concluded that the Kelsenian prin-
political and economic ideologies that were The issues for determination revolved ciple was equally applicable in the Uganda
to be followed. Obote claimed - not without around the constitutional validity of the case and held that the 1966 Constitution
justification - that Ibingira’s group, which, emergency powers laws by which Matovu was thus valid because it was the product
included the President, Sir Edward Mutesa, was detained and therefore by extension, of a successful revolution which had led to
and the Buganda government at Mengo the constitutionality or legal validity of a new legal order, ousting that of the 1962
and which also counted on the support Michael Matovu’s detention. The most Constitution.
New Zealand, Australia bicker over flag

Winston Peters, New Zealand's acting prime minister, has claimed Australia stole New Zealand's flag design, saying his country "got there first." New Zealand adopted its flag in 1902, while Australia's was not officially recognised until 1954. The two countries' flags are both dark blue with the Union Jack emblem in the top left corner. The only distinct variation is that the Australian version also has six white stars, while New Zealand's has a constellation of four red stars. Peters, who is temporarily leading the country while Prime Minister Jacinda Ardern is on maternity leave, raised the issue recently and complained: "We got there first with this design. We designed it and they borrowed it and if we wanted to clear the matter up they should change their flag. It must be patently obvious that all over the world people are confused. I've been in places like Turkey and elsewhere where they've confused our countries on the basis of those flags. It's not helpful." But Opposition leader, Simon Bridges, mocked Peters' flag pre-occupation, accusing him of populism and labeling him "a poor man's Donald Trump."

Romanian court rejects man's claim that he is, in fact, alive

After more than 20 years of working as a cook in Turkey, 63-year-old Constantin Reliu returned home to Romania to discover that his wife had had him officially registered as dead. Apparently, she wanted to remarry and now lives in Italy. Reliu has since been living a legalistic nightmare of trying to prove to authorities that he is, in fact, alive. A court in the northeastern city of Vaslui refused to overturn his death certificate because his request was filed "too late." The decision, the court said, is final. "I am a living ghost," said Reliu. "I am officially dead, although I'm alive," he told The Associated Press early this year. "I have no income and because I am listed as dead, I can't do anything." Reliu explained that he first went to work in Turkey in 1992 and returned in 1995 to the first big shock of his marriage — his wife's infidelity. In 1999, he decided to return to Turkey for good. But in December, last year, Turkish authorities detained him over expired papers and in January deported him to Romania. Upon landing at Bucharest airport, he was informed by border officials that he had been officially declared dead and underwent six hours of questioning and tests. Reliu said he has been banned for life from returning to Turkey but would like to write to Turkish President Recep Tayyip Erdogan to appeal the decision.

Bermuda Triangle mystery solved?

It appears the reason behind the Bermuda Triangle's threatening powers to sailors may be down to the rocks underwater, according to a new documentary. The Bermuda Triangle is a large area of the North Atlantic Ocean between Bermuda, Florida and Puerto-Rico. It is covered in shipping lanes as large vessels cross ports in the Americas, Europe and the Caribbean. The area, also known as "Devil's Triangle," has been linked with several unexplainable incidents over the years. But the reason for these disappearances may be lurking beneath the surface. Rocks which form the bed of the Atlantic Ocean may have interfered with captains' compasses, sending them in the wrong direction. Footage from Channel 5's documentary, called "Secrets of the Bermuda Triangle," explains why this could happen. Nick Hutchings, a mineral prospector, suggests the geology of Bermuda is unusual. "Bermuda's basically a sea mountain – it's an underwater volcano," says Nick. "30 million years ago, it was sticking up above sea level. It has now eroded away and we're left with the top of a volcano," he said. "We have a few core samples, which have magnetite in them. It's the most magnetic naturally occurring material on Earth."

East Fife 4, Forfar 5 soccer result finally happens

For many years, the late British comedian, Eric Morecambe (pictured), dreamt of this tongue-twister of a Scottish League football result happening. Morecambe used the tongue-twister East Fife 4, Forfar 5, as a greeting to his friend James Alexander Gordon, who read the classified football results on BBC radio for 40 years. "Whenever I saw him over a 20-year period, he would say 'East Fife 4, Forfar 5'. I've got a tape of that." And finally "East Fife 4, Forfar 5" came to pass on July 22, this year. The teams were locked at 1-1 during their match, triggering a penalty shootout. East Fife 4, Forfar 5 is regarded by some as the perfect football scoreline due to its rhyming nature and rhythmic intonation when spoken out loud.
The rainy season is underway and since roads in Kampala and the rest of the country tend to have potholes, driving can be a tricky exercise. Since some of the roads are not tarmacked, it may be helpful to follow a few rules since the rainy season will go on for the next four months.

In addition to the potentially poor visibility that accompanies most heavy rain, drivers should be ready to protect themselves against hydroplaning. Hydroplaning can occur when a vehicle is traveling too fast in heavy rain conditions, causing the vehicle's tires to travel on a thin layer of water rather than grip the surface of the road. This has the potential to make steering and braking difficult and could even lead to losing control of your vehicle. These are additional tips to help you.

Driving in the rain can be both scary and dangerous, and it's important to take wet weather seriously when you're on the road. There are lots of things you can do, including being prepared by making sure your car is ready and ensuring you can always see properly. But most importantly, you have to drive according to the conditions, and adjust a few of your habits to avoid sliding, skidding, or being involved in a collision. Slowing down is also the only way to keep your vehicle from hydroplaning. Also remember that one of the most dangerous times to drive is soon after it begins to rain, as oils on the roadway make for slick conditions. Waiting a few minutes, rather than rushing to your destination, can be a safer plan when it is raining.

Turn your lights on. Turn your headlights on to help other vehicles see you. Many states require the use of headlights during rain, even in broad daylight.

Give other vehicles more space. Add 1-2 extra seconds of following time in the rain, which gives you and the cars behind you more time to react to traffic.

Keep your windows clean and clear. Being able to see properly is key to driving safely any time, especially when visibility is already reduced because of rain. To improve your visibility, clean the inside and outside of the windows regularly to remove dirt, dust, mud, smoke, fingerprints, grime, and other materials. If your windows fog up, turn on the air conditioning or cold air in the car and aim the vents at the windows. Turn on the rear defroster, and open the windows if necessary to increase the airflow.
Justine Nabbosa is a renowned gospel artiste, lead worshiper and co-pastor with Wilson Bugembe at the Worship House. The vocal dynamo is also president of Next Girl Champion, an annual conference that empowers girls and women in Uganda.

Many in the gospel music circles first knew her as a back-up voice in Wilson Bugembe's music. Although it remains the podium that brought her to the limelight, her first album featuring 'Oli Katonda' has, since its release in 2016, broken barriers and performed beyond her expectations. "I am grateful to God for the far he has brought me with that song," Nabbosa says. "I realized that it was time for me to shine and no one could stop me."

Featuring in Bugembe's music made her voice popular and the audience started demanding solo music, but Nabbosa was not about to take on a solo career. Three years ago, she emerged out of the comfort zone and started singing compositions by different people, starting her solo music journey. The start of the journey would seem tough as her first songs were scarcely received, not until she released 'Oli Katonda' that shook airwaves.

Perhaps her music journey has not been a tough one since it is where her passion lies, but also because she never saw music as a source of income but as a part of ministry work. Bugembe, who she considers to be the right person, has always been her stepping-stone, she says. "Backing him made my voice popular and came with so many favours of recording for free, so by the time I chose to go solo," Nabbosa says, "the industry was ready and willing to take on my music."

She recalls meeting Bugembe in 2002 during her Senior Four long holiday at his first concert. She was only a visitor at Christian Life Church in Bwaise, while she stayed at her uncle's home in Kampala. On the fateful day, Nabbosa landed on a poster announcing Bugembe's forthcoming concert, which she vowed to attend even when her pockets wouldn't support the urge. She would later make it inside the performance room, which opportunity, in life. It was not long before she made headway to stay and study in Kampala at Lubiri Secondary School. This opened more doors to meet and interact with Bugembe through the school exchange fellowship meetings.

About the same time, Bugembe spotted the singing talent in her and when he chose to start a church in Nansana, Nabbosa was made one of the founder members. "I started singing with pastor Bugembe on his third album of 'Kani' and God used me miraculously to touch lives of many people," she says. On April 24, 2017, Nabbosa held her maiden solo concert and has since travelled the world ministering through music.

She is the middle child of the nine children of James and Tappi Kiyingi, born and raised in Kamuli, Eastern Uganda. She is a graduate of Industrial and Organizational Psychology from Makerere University with several awards in the ministry. She is yet to get married to the love of her life.
What do we not know about you?
Honesty is me. I love God and always trust in him for my next move. I love to serve him and the best thing that has ever happened to me is for God to let me serve Him. It is only God that has brought me thus far. I am also a smiley person who loves laughing and being around happy people.

What is your idea of perfect happiness?
Honestly, I don't know of another thing that brings joy and happiness other than to serve God and be in His presence always. I draw my perfect happiness from serving God.

What is your greatest fear?
In the presence of God, I am fearless.

What is the trait you most deplore in yourself?
Sometimes, I procrastinate. But also, being a smiley and simple person often, it gets hard to prove to people around me when I am serious about an issue. As for my patience, some people misuse it and take me for granted.

What is the trait you most deplore in others?
Being too serious about life.

Which living person do you most admire?
I admire Osinachi Kalu Joseph aka 'Sinach' the gospel music singer and worship leader. I love her passion for God and that is my dream. I aim worshipper and keep in that space to my grave.

What is your greatest extravagance?
I feel tempted all the time to buy more shoes and clothes even when I may not need them.

What is the greatest thing you have ever done?
Releasing my solo music.

What is your current state of mind?
I am happy.

What do you consider the most overrated virtue?
I think we waste more time talking than taking action. The leaders and the followers talk too much with little or nothing being put to action.

What does being powerful mean to you?
I know that money can never bring joy and power to someone but the peace in God does. The ultimate happiness and power comes from God.

On what occasion do you lie?
I am not a liar.

What do you most dislike about your appearance?
I struggle to keep in shape because I gain a lot of weight easily.

Which living person do you most despise?
I don't despise God's creations.

What is the quality you most like in a woman?
Every straight human being should be prayerful.

What is the quality you most like in a man?
We should all be prayerful.

What do you regard as the lowest depth of misery?
My heart really goes out to women who don't have children and yet they desire to have.

What or who is the greatest love of your life?
I love my pastor, my parents and Jean Peace.

When and where were you happiest?
There have been so many happy moments in my life but the most recent one was being given a Visa to USA. I had tried several times and was denied until recently when God answered my prayers. I was very happy the day I met Pastor Wilson Bugembe in 2002 for the first time. I was also happy singing in the presence of thousands of Christians in 77 days of Glory and being appreciated with Shs5million.

If you could change one thing about yourself, what would it be?
Sometimes I am slow to act, which not only bothers other people around me but also myself. I would love to be faster.

Where would you most like to live?
Here is home though I believe that it's a team effort to make it better.

What is your most treasured possession?
I love my bible; it is the bible that I have read over time and made myself familiar with it. It is the first thing that I take along with me everywhere. It is something that I can get hold of anywhere.

What is your favorite occupation?
I am very passionate about ministering through music.

What do you most value in your friends?
I make friends with different people for different reasons but I get along so fast with people who serve God and pray endlessly. I also love those who think big.

Who are your favorite writers?
I am attracted to content not writers.

Which historical figure do you most identify with?
Billy Graham lived a test of time and left a legacy. He preached the word of God all over the world and is one person who died respectfully of old age. I want to live like him.

What is your greatest regret?
Not any that I can recall at the moment.

How would you like to die?
Naturally, in old age while in the middle of service to God.

What is your motto?
Trust in the Lord with all your heart and lean not on your own understanding.
What was at first a trade skirmish – with US President Donald Trump imposing tariffs on steel and aluminum – appears to be quickly morphing into a full-scale trade war with China.

If the truce agreed by Europe and the US holds, the US will be doing battle mainly with China, rather than the world (of course, the trade conflict with Canada and Mexico will continue to simmer, given US demands that neither country can or should accept).

Beyond the true, but by now platitudinous, assertion that everyone will lose, what can we say about the possible outcomes of Trump's trade war? First, macro-economics always prevails: if the United States' domestic investment continues to exceed its savings, it will have to import capital and have a large trade deficit.

Worse, because of the tax cuts enacted at the end of last year, the US fiscal deficit is reaching new records – recently projected to exceed $1 trillion by 2020 – which means that the trade deficit almost surely will increase, whatever the outcome of the trade war. The only way that won't happen is if Trump leads the US into a recession, with incomes declining so much that investment and imports plummet.

The US might sell more natural gas to China and buy fewer washing machines; but it will sell less natural gas to other countries and buy washing machines or something else from Thailand or another country that has avoided the irascible Trump's wrath. But, because the US interfered with the market, it will be paying more for its imports and getting less for its exports than otherwise would have been the case. In short, the best outcome means that the US will be worse off than it is today.

The US has a problem, but it's not with China. It's at home: America has been saving too little. Trump, like so many of his compatriots, is immensely shortsighted. If he had a whit of understanding of economics and a long-term vision, he would have done what he could to increase national savings. That would have reduced the multilateral trade deficit.

There are obvious quick fixes: China could buy more American oil and then sell it on to others. This would not make an iota of difference, beyond perhaps a slight increase in transaction costs. But Trump could trumpet that he had eliminated the bilateral trade deficit.

In fact, significantly reducing the bilateral trade deficit in a meaningful way will prove difficult. As demand for Chinese goods decreases, the renminbi's exchange rate will weaken – even without any government intervention. This will partly offset the effect of US tariffs; at the same time, it will increase China's competitiveness with other countries—and this will be true even if China doesn't use other instruments in its possession, like wage and price controls, or push strongly for productivity increases. China's overall trade balance, like that of the US, is determined by its macro-economics.

If China intervenes more actively and retaliates more aggressively, the change in the US-China trade balance could be even smaller. The relative pain each will inflict on the other is difficult to ascertain. China has more control of its economy, and has wanted to shift toward a growth model based on domestic demand rather than investment and exports. The US is simply helping China do what it has already been trying to do. On the other hand, US actions come at a time when China is trying to manage excess leverage and excess capacity; at least in some sectors, the US will make these tasks all the more difficult.

This much is clear: if Trump's objective is to stop China from pursuing its "Made in China 2025" policy – adopted in 2015 to further its 40-year goal of narrowing the income gap between China and the advanced countries – he will almost surely fail. On the contrary, Trump's actions will only strengthen Chinese leaders' resolve to boost innovation and achieve technological supremacy, as they realize that they can't rely on others, and that the US is actively hostile.

If a country enters a war, trade or otherwise, it should be sure that good generals – with clearly defined objectives, a viable strategy, and popular support – are in charge. It is here that the differences between China and the US appear so great. No country could have a more unqualified economic team than Trump's, and a majority of Americans are not behind the trade war.

Public support will wane even further as Americans realize that they lose doubly from this war: jobs will disappear, not only because of China's retaliatory measures, but also because US tariffs increase the price of US exports and make them less competitive; and the prices of the goods they buy will rise. This may force the dollar's exchange rate to fall, increasing inflation in the US even more – giving rise to still more opposition. The Fed is likely then to raise interest rates, leading to weaker investment and growth and more unemployment.

Trump has shown how he responds when his lies are exposed or his policies are failing: he doubles down. China has repeatedly offered face-saving ways for Trump to leave the battlefield and declare victory. But he refuses to take them up. Perhaps hope can be found in three of his other traits: his focus on appearance over substance, his unpredictability, and his love of "big man" politics. Perhaps in a grand meeting with President Xi Jinping, he can declare the problem solved, with some minor adjustments of tariffs here and there, and some new gesture toward market opening that China had already planned to announce, and everyone can go home happy.

In this scenario, Trump will have "solved," imperfectly, a problem that he created. But the world following his foolish trade war will still be different: more uncertain, less confident in the international rule of law, and with harder borders. Trump has changed the world, permanently, for the worse. Even with the best possible outcomes, the only winner is Trump – with his outsize ego pumped up just a little more.

Joseph E. Stiglitz, a Nobel laureate in economics, is University Professor at Columbia University and Chief Economist at the Roosevelt Institute. His most recent book is Globalization and Its Discontents Revisited: Anti-Globalization in the Era of Trump.
https://www.scribd.com/document/386393819/THE-INDEPENDENT-Issue-533-pdf
When I select an option in a radio group, the selection is focused (I can see the styled highlighting) but not selected. When the radio group is blurred (loses focus), the selected item now appears selected.
Figure: Item selected and programmatic selection successful, but radio button is not checked
Figure: Once I click out of the radio group, the selected button is checked
Hi Matt,
I tried to reproduce the described faulty behavior on our online RadioGroup demos and on a standalone local test page, which uses the sample snippet from our RadioGroup documentation. I used Win/Chrome and Mac/Safari. In all cases, the "selected" styles were applied as expected.
Since other people report the issue too, it seems there is something wrong here. Can you provide a test page that exhibits the described problem and we will review it immediately? Thanks!
We have had this problem when using Tailwind's tailwind-forms module, with Telerik for Blazor UI.
Tailwind does a reset and has a comprehensive set of attribute-based pseudo-styles by default.
Telerik's styles are overridden if Tailwind's are loaded later. In most apps, Tailwind would load after Telerik in order to allow the application to do overrides.
As a short-term fix, Tailwind forms has a class-based strategy that increases its specificity. tailwindcss-forms#using-classes-instead-of-element-selectors describes how to resolve - if you're using Tailwind.
But ideally, Telerik's styles would be slightly more specific and have safe defaults under their own class namespace, so that other styling - whether from Tailwind or something else - targeting type attributes isn't more specific than the k-checkbox selectors. This wouldn't stop applications from explicitly styling k-checkbox or radio if required, but would require them to be more specific than a basic [type] selector.
I have simulated the problem in a fiddle here:
JSFiddle - Code Playground
Test 1 fails:
Test 2 passes:
But test 2 is an unlikely use scenario I think, because typically app users will want their app CSS to come last to allow overrides.
This means we need a 3rd case:
In this case, we join the classnames with attribute selectors.
There might be a more effective way to do this just on the base class, rather than the state classes.
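To make the specificity relationship concrete, here is a hypothetical sketch; the property values are illustrative and not Telerik's or Tailwind's actual rules, only the selector shapes matter:

```css
/* Tailwind's preflight/forms reset targets the type attribute.
   Specificity of an attribute selector: (0,1,0). */
[type="radio"] {
  background-color: #fff;
}

/* A bare class selector also has specificity (0,1,0), so whichever
   stylesheet loads LAST wins — this is why the component style loses
   when Tailwind is loaded after Telerik. */
.k-radio {
  background-color: #eee;
}

/* Joining the class with the attribute selector raises specificity
   to (0,2,0), so the component style wins regardless of load order. */
.k-radio[type="radio"] {
  background-color: #eee;
}
```

The last selector is the "join the classnames with attribute selectors" idea: it stays safely inside the `k-` class namespace while out-ranking any plain `[type]` rule.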
Hi Josh,
Thanks a lot for the detailed description of your use case and JSFiddle.
This is a tricky situation. Theoretically, we can increase the specificity of our selectors, but this can have various implications - for example, it would also make it harder for applications to intentionally override our styles.
I hope this makes sense.
https://www.telerik.com/forums/radio-button-not-selected-until-blur
Angular support. If you are using Angular 5, consider upgrading to the newer HttpClient. You can find a tutorial for the HttpClient service in my post Angular 5: Making API calls with the HttpClient service.
In my previous article, Angular 2: HTTP, Observables, and concurrent data loading, we investigated querying data from an API endpoint using Angular 2's Http service and the Observable pattern. In this second article, we will look at using Http to save data to our API endpoint.
Consider the Angular 2 service we created in the previous article, which wraps the Http service's GET calls and exposes the results as Observables.
The back-end API
The next step is to handle the other HTTP verbs: POST, PUT, and DELETE. Unlike our original GET requests from Part 1, these requests require a live API backend. You can use Node, Drupal, Django, or the back-end framework of your choice to create this API. The actual API creation is out of scope for this article.
For this article, the demo app contains a simple Express API with REST endpoints for creating, updating, and deleting food items; see the demo source on GitHub for the full route list.
Not all APIs return the same data and formats. Some may return a different status code, some XML data, or nothing at all. Consult the documentation for your API to determine what the response format will look like.
To communicate with the API, we add several new methods to our DemoService class:
import {Injectable} from "@angular/core";
import {Http, Response, Headers, RequestOptions} from "@angular/http";
import {Observable} from "rxjs/Rx";

@Injectable()
export class DemoService {
  …
  createFood(food) {
    let headers = new Headers({ 'Content-Type': 'application/json' });
    let options = new RequestOptions({ headers: headers });
    let body = JSON.stringify(food);
    return this.http.post('/api/food/', body, options)
      .map((res: Response) => res.json());
  }

  updateFood(food) {
    let headers = new Headers({ 'Content-Type': 'application/json' });
    let options = new RequestOptions({ headers: headers });
    let body = JSON.stringify(food);
    return this.http.put('/api/food/' + food.id, body, options)
      .map((res: Response) => res.json());
  }

  deleteFood(food_id) {
    return this.http.delete('/api/food/' + food_id);
  }
}
Notice that our createFood() and updateFood() methods use API endpoints which return the saved object in JSON form. Thus we need to use
.map((res: Response) => res.json()) to make the JSON objects easily available to the HTTP Observable's subscribers.
The DELETE method of our API returns nothing, so we don't use the
.map() method.
If we didn't do this, the subscribers would receive a Response object instead. This is more difficult to work with, and defeats our goal of abstracting the HTTP logic within the service.

Components
Now that we have the service in place, we can add some basic CRUD features to our AppComponent:
...
@Component({
  selector: 'demo-app',
  template: `
    <h1>Angular2 HTTP Demo App</h1>
    <h2>Foods</h2>
    <ul>
      <li *ngFor="let food of foods">
        <input type="text" name="food-name" [(ngModel)]="food.name">
        <button (click)="updateFood(food)">Save</button>
        <button (click)="deleteFood(food)">Delete</button>
      </li>
    </ul>
    <p>Create a new food:
      <input type="text" name="new_food" [(ngModel)]="new_food">
      <button (click)="createFood(new_food)">Save</button>
    </p>
    <h2>Books and Movies</h2>
    ...
  `
})
export class AppComponent {
  public foods;
  public books;
  public movies;
  public new_food;
  ...
}
Note: For brevity, I omitted some code that has not changed since the previous article. See GitHub for the full source code.
What about Observable.forkJoin()?
In the previous post, we used the forkJoin() method to run multiple simultaneous GET requests. We could theoretically do the same when saving data, but it would be difficult to do this safely. If one request completes successfully while another request fails, your data could end up in a broken or partially-saved state.
You would be better off passing a single, larger data object to your back-end API, which could then wrap all the saving logic in a database transaction for better data integrity.
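To sketch what "a single, larger data object" could look like, here is a minimal, hypothetical example; the `/api/batch` endpoint and the payload shape are assumptions for illustration, not part of the demo API:

```typescript
// Bundle several entities into ONE request body, so the server can
// wrap the whole save in a single database transaction instead of the
// client firing several independent requests with forkJoin().
interface BatchPayload {
  foods: { name: string }[];
  books: { title: string }[];
}

function buildBatchPayload(foods: { name: string }[],
                           books: { title: string }[]): string {
  const payload: BatchPayload = { foods, books };
  return JSON.stringify(payload); // one body, one request
}

// In the service, this body would be sent once, e.g.:
// this.http.post('/api/batch/', buildBatchPayload(foods, books), options)
//   .map((res: Response) => res.json());

const body = buildBatchPayload([{ name: 'kale' }], [{ title: 'Dune' }]);
console.log(body);
```

Either everything in the payload is saved or nothing is, which avoids the partially-saved state that concurrent requests can leave behind.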
Happy coding!
Ajax array query params
Hi ,
Is there an automated way in Angular to send array query params in a GET request, like "param[]=1&param[]=2...", to a PHP endpoint?
Regards.
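One way to build that kind of query string is the standard URLSearchParams web API (available in browsers and Node); this is a general sketch, not an Angular-specific helper:

```typescript
// Build a PHP-style array query string by repeating the "name[]" key
// once per element. URLSearchParams percent-encodes the brackets
// (%5B%5D), which PHP decodes back into an array on the server side.
function arrayToQuery(name: string, values: number[]): string {
  const params = new URLSearchParams();
  for (const v of values) {
    params.append(`${name}[]`, String(v));
  }
  return params.toString(); // e.g. "param%5B%5D=1&param%5B%5D=2"
}

const query = arrayToQuery('param', [1, 2]);
console.log(decodeURIComponent(query)); // → param[]=1&param[]=2
```

The resulting string can be appended to the request URL or passed through whatever search-params option your HTTP client offers.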
acci
Tue, 11/15/2016 - 10:38
Fri, 06/09/2017 - 21:58
In reply to Errata by acci
Fabien
Thu, 12/15/2016 - 21:03
possible bug in firefox
I found also this strange bug on firefox.
So i found your article :-)
Do you have more info on it ? For the moment i put .json at the end to get the json reply.
But i see here :… the Content-type should be supported (that is not working for me....)
Randy Stimpson
Fri, 12/16/2016 - 22:37
http.delete
http.delete isn't working for me. Also I get an error when I include HTTP_PROVIDERS in my import statement, saying there is no exported member 'HTTP_PROVIDERS'.
Fri, 06/09/2017 - 22:01
In reply to http.delete by Randy Stimpson
HTTP_PROVIDERS was removed
HTTP_PROVIDERS was a legacy of a pre-release version of Angular, which was later removed. I must have missed it when I updated the code samples for newer Angular versions. Fixed.
Tomas Chibai
Mon, 01/02/2017 - 07:55
Django and Angular 2
Hi, Thanks for good tutorial.
I want to know how can i join Django and angular 2 in same project?
nasir junaid
Tue, 03/14/2017 - 07:09
Demon Hunter
Wed, 03/22/2017 - 04:29
Confuise In Your Code
createFood(food) {
let headers = new Headers({ 'Content-Type': 'application/json' });
let options = new RequestOptions({ headers: headers });
let body = JSON.stringify(food);
return this.http.post('/api/food/', body, headers).map((res: Response) => res.json());
}
Why you create options but not using?
Fri, 06/09/2017 - 22:04
In reply to Confuise In Your Code by Demon Hunter
Rohan
Tue, 04/04/2017 - 08:52
Copying JSON
Hi,
If i am getting a JSON String data from an api call. how can i use it inside an instance of the Typescript class
Fri, 06/09/2017 - 22:21
In reply to Copying JSON by Rohan
When you're using Angular 2…
When you're using Angular 2/4 and Observables, the data you receive from an API call should be automatically converted to a JSON object, rather than JSON string data.
Once you have the JSON object, you can use it in your Component class or display in a template, like the examples shown here:-
Dharmesh Bokadiya
Fri, 06/29/2018 - 05:59
How to send Angular 4 form data to the server
I got helpful information, thank you.
Please explain briefly how Angular 4 form data flows on submit: from the template to the component, from the component to the service, and from the service to the server.
Gopala Raja Naika
Sun, 07/29/2018 - 09:16
Thanks for the best article
Hi Dechant, I was struggling to send data through the POST method. I found a great solution in your article, thanks a lot.
shikha
Thu, 03/14/2019 - 12:04
I am not able to post data to the server
onSubmit(newdata) {
// console.log(newdata);
this.http.post(this.url + '/vender', JSON.stringify(newdata), this.HttpUploadOptions).subscribe(abc=> {
console.log(abc);
});
}
Pablo
Thu, 10/06/2016 - 16:51
https://www.metaltoad.com/blog/angular-2-using-http-service-write-data-api
Minimum insertions to form a palindrome with permutations allowed: given a string, find the minimum number of insertions needed to make it a palindrome. The position of characters can be changed in the string.
Example
Input: malyalam
Output: 1
Explanation
If we add 'a' to the initial string, we can create a palindrome (for example, "malayalam").
Input: madaam
Output: 1
Explanation
Add either 'd' or 'a' to make the original string a palindrome (for example, "madadam").
Algorithm
- Set l to the length of the string and output to 0.
- Declare an integer array of size 26.
- Store the count of each character of the string in the array.
- Traverse the array from i = 0 while i < 26.
- Check if countChar[i] % 2 == 1; if true, do output++.
- If output is equal to 0, return 0.
- Else return output - 1.
Explanation
You are given a string; your task is to find the minimum number of insertions needed to make it a palindrome. The position of characters can be changed in the string. We count the occurrence of each character and store the counts in an array. The idea is that when a string is a palindrome, at most one character can have an odd count (and only when the string length is odd); all other characters must have an even count. So we need to find the characters that occur an odd number of times.
We count every character in the input string and store the counts in an array of size 26, since the input is assumed to contain only the 26 lowercase characters. As mentioned, a palindrome can have at most one character with an odd count, so the answer is one less than the number of characters with odd counts.
While traversing the array from i = 0 to i < 26, we check whether dividing each count by 2 leaves a remainder of 1; if it does, we increase the output count by 1 (output++). If output is still zero after the traversal, no character occurs an odd number of times, the string is already a permutation of a palindrome, and we return 0. Otherwise we return output - 1, which is the required number of insertions.
Code
C++ code to find Minimum insertions to form a palindrome with permutations allowed
#include <iostream>
#include <string>
using namespace std;

int getMinimumInsertion(string str) {
    int l = str.length(), output = 0;
    int countChar[26] = { 0 };
    for (int i = 0; i < l; i++)
        countChar[str[i] - 'a']++;
    for (int i = 0; i < 26; i++)
        if (countChar[i] % 2 == 1)
            output++;
    return (output == 0) ? 0 : output - 1;
}

int main() {
    string str = "malyalam";
    cout << getMinimumInsertion(str);
    return 0;
}
1
Java code to find Minimum insertions to form a palindrome with permutations allowed
class insertionToPalindrome {
    public static int getMinimumInsertion(String str) {
        int l = str.length(), output = 0;
        int countChar[] = new int[26];
        for (int i = 0; i < l; i++)
            countChar[str.charAt(i) - 'a']++;
        for (int i = 0; i < 26; i++) {
            if (countChar[i] % 2 == 1)
                output++;
        }
        return (output == 0) ? 0 : output - 1;
    }

    public static void main(String[] args) {
        String str = "malyalam";
        System.out.println(getMinimumInsertion(str));
    }
}
1
Complexity Analysis
Time Complexity
O(n) where “n” is the number of characters in the input string.
Space Complexity
O(1), because the extra array we created has constant size (26 entries), so the space used does not grow with the input.
https://www.tutorialcup.com/interview/string/minimum-insertions-to-form-a-palindrome-with-permutations-allowed.htm
LineType
Since: BlackBerry 10.0.0
#include <bb/system/phone/LineType>
To link against this class, add the following line to your .pro file: LIBS += -lbbsystem
The types of phone lines available for making calls.
You must also specify the access_phone permission in your bar-descriptor.xml file.
Overview
Public Types
Values describing the type of a phone line. Since: BlackBerry 10.0.0
- Invalid = -1: The line is invalid.
- Cellular = 0: The line is cellular. Since: BlackBerry 10.0.0
- MVS = 1: The line is MVS (Mobile Voice System). Since: BlackBerry 10.0.0
- VideoChat = 2: The line is video chat. Since: BlackBerry 10.0.0
- SecuVOICE = 3: The line is SecuVOICE (Secure Voice). Since: BlackBerry 10.2
http://developer.blackberry.com/native/reference/cascades/bb__system__phone__linetype.html
This chapter covers creating data sets from the many types of data sources that Oracle BI Publisher supports.
Oracle BI Publisher can retrieve data from multiple types of data sources.
To create a new data set:
On the component pane of the data model editor click Data Sets.
Click New Data Set.
Select the data set type from the list to launch the appropriate dialog.
Complete the required fields to create the data set. See the corresponding section in this chapter for information on creating each data set type.
Click the New Data Set icon and then click SQL Query. The Create Data Set - SQL dialog launches.
Enter a name for this data set.
If you are not using the default data source for this data set, select the Data Source from the list.
Enter the SQL query or select Query Builder. See Using the Query Builder for information on the Query Builder utility.
If you are using Flexfields, bind variables, or other special processing in your query, edit the SQL returned by the Query Builder to include the required statements.
Note: If you include lexical references for text that you embed in a SELECT statement, then you must substitute values to get a valid SQL statement.
After entering the query, click OK to save. BI Publisher will validate the query.
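For example, a simple query against Oracle's HR sample schema (shown here purely as an illustration; substitute your own tables and columns) might look like this:

```sql
-- Illustrative query against the HR sample schema
select EMPLOYEES.LAST_NAME    as LAST_NAME,
       EMPLOYEES.PHONE_NUMBER as PHONE_NUMBER,
       EMPLOYEES.HIRE_DATE    as HIRE_DATE
from   HR.EMPLOYEES EMPLOYEES
```

A query of this form is also used in the user-information example later in this chapter.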
Use the Query Builder to build SQL queries without coding. The Query Builder enables you to search and filter database objects, select objects and columns, create relationships between objects, and view formatted query results with minimal SQL knowledge.
The Query Builder page is divided into two sections:
Object Selection pane contains a list of objects from which you can build queries. Only objects in the current schema display.
Design and output pane consists of four tabs:
Model - displays selected objects from the Object Selection pane.
Conditions - enables you to apply conditions to your selected columns.
SQL - displays the query.
Results - displays the results of the query.
To build a query, perform the following steps:
Select objects from the Object Selection pane.
Add objects to the Design pane and select columns.
Optional: Establish relationships between objects.
Add a unique alias name for any duplicate column.
Optional: Create query conditions.
Execute the query and view results.
In the Object Selection pane you can select a schema and search and filter objects.
To hide the Object Selection pane, select the control bar located between it and the Design pane. Select it again to unhide it.
The Schema list contains all the available schemas in the data source. Note that you may not have access to all that are listed.
Use the Search field to enter a search string. Note that if more than 100 tables are present in the data source, you must use the Search feature to locate and select the desired objects.
The Object Selection pane lists the tables, views, and materialized views from the selected schema (for Oracle databases, synonyms are also listed). Select the object from the list and it displays on the Design pane. Use the Design pane to identify how the selected objects will be used in the query.
Columns of all types display as objects in the Design pane. Note the following column restrictions:
You can select no more than 60 columns for each query.
Only the following column types are selectable:
VARCHAR2, CHAR
NUMBER
DATE, TIMESTAMP
Note: The data type TIMESTAMP WITH LOCAL TIMEZONE is not supported.
BLOB
Note: The BLOB must be an image. When you execute the query in the Query Builder, the BLOB will not display in the Results pane, however, the query will be constructed correctly when saved to the data model editor.
Select an object.
The selected object displays in the Design pane. An icon representing the datatype displays next to each column name.
Select the check box for each column to include in your query.
When you select a column, it appears on the Conditions tab. Note that the Show check box on the Conditions tab controls whether a column is included in query results. By default, this check box is selected.
To select the first twenty columns, click the small icon in the upper left corner of the object and then select Check All.
To execute the query and view results, select Results.
Tip: You can also execute a query using the key strokes CTRL + ENTER.
As you select objects, you can resize the Design and Results panes by selecting and dragging the gray horizontal rule dividing the page.
To remove an object, select the Remove icon in the upper right corner of the object.
To temporarily hide the columns within an object, click the Show/Hide Columns icon.
Conditions enable you to filter and identify the data you want to work with. As you select columns within an object, you can specify conditions on the Conditions tab. You can use these attributes to modify the column alias, apply column conditions, sort columns, or apply functions. The following figure shows the Conditions tab:
The following table describes the attributes available on the Conditions tab:
As you select columns and define conditions, Query Builder writes the SQL for you.
To view the underlying SQL, click the SQL tab
You can create relationships between objects by creating a join. A join identifies a relationship between two or more tables, views, or materialized views.
When you write a join query, you specify a condition that conveys a relationship between two objects. This condition is called a join condition. A join condition determines how the rows from one object will combine with the rows from another object.
Query Builder supports inner, outer, left, and right joins. An inner join (also called a simple join) returns the rows that satisfy the join condition. An outer join extends the result of a simple join. An outer join returns all rows that satisfy the join condition and returns some or all of those rows from one table for which no rows from the other satisfy the join condition.
Note: See Oracle Database SQL Reference for information about join conditions.
Create a join manually by selecting the Join column in the Design pane.
From the Object Selection pane, select the objects you want to join.
Identify the columns you want to join.
You create a join by selecting the Join column adjacent to the column name. The Join column displays to the right of the datatype. When your cursor is in the appropriate position, the following help tip displays:
Select the appropriate Join column for the first object.
When selected, the Join column is darkened. To deselect a Join column, simply select it again or press ESC.
Select the appropriate Join column for the second object.
When joined, a line connects the two columns. An example is shown in the following figure:
Select the columns to be included in your query. You can view the SQL statement resulting from the join by positioning the cursor over the join line.
Click Results to execute the query.
Once you have built the query, click Save to return to the data model editor. The query will appear in the SQL Query box. Click OK to save the data set.
To link the data from this query to the data from other queries or modify the output structure, see Structuring Data.
Now you have your basic query, but in your report you want your users to be able to pass a parameter to the query to limit the results. For example, in the employee listing, you want users to be able to choose a specific department.
To do this, add the following after the where clause in your query:
and "DEPARTMENT_NAME" in (:P_DEPTNAME)
where P_DEPTNAME is the name you choose for the parameter. This is shown in the following figure:
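Putting the fragment above together with a base query, a complete parameterized statement might look like the following sketch (the table and column names assume the HR sample schema and are illustrative only):

```sql
-- Illustrative: limit employees to the departments the user selects
select EMPLOYEES.LAST_NAME           as LAST_NAME,
       EMPLOYEES.HIRE_DATE           as HIRE_DATE,
       DEPARTMENTS.DEPARTMENT_NAME   as DEPARTMENT_NAME
from   HR.EMPLOYEES   EMPLOYEES,
       HR.DEPARTMENTS DEPARTMENTS
where  EMPLOYEES.DEPARTMENT_ID = DEPARTMENTS.DEPARTMENT_ID
and    "DEPARTMENT_NAME" in (:P_DEPTNAME)
```

At runtime, BI Publisher binds the user's selection to the :P_DEPTNAME parameter.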
When you select Save the data model editor will ask if you want to create the parameter you entered with the bind variable syntax:
Click OK to have the data model editor create the parameter entry for you.
To add the parameter, see Adding Parameters and Lists of Values.
Important: After manually editing the query, the Query Builder will no longer be able to parse it. Any further edits must also be made manually.
Once you have saved the query from the Query Builder to the data model editor, you may also use the Query Builder to edit the query:
Select the SQL data set.
Click the Edit Selected Data Set toolbar button.
This launches the Edit Data Set dialog. Click Query Builder to load the query to the Query Builder.
Note: If you have made modifications to the query, or did not use the Query Builder to construct it, you may receive an error when launching the Query Builder to edit it. If the Query Builder cannot parse the query, you can edit the statements directly in the text box.
Edit the query and click Save.
BI Publisher supports Multidimensional Expressions (MDX) queries against your OLAP data sources. MDX lets you query multidimensional objects, such as cubes, and return multidimensional cellsets that contain the cube's data. See your OLAP database documentation for information on the MDX syntax and functions it supports.
Note: Ensure that in your OLAP data source that you do not use Unicode characters from the range U+F900 to U+FFFE to define any metadata attributes such as column names or table names. This Unicode range includes half-width Japanese Katakana and full-width ASCII variants. Using these characters will result in errors when generating the XML data for a BI Publisher report.
Click the New Data Set toolbar button and select OLAP. The Create Data Set - OLAP dialog launches.
Enter a name for this data set.
Select the Data Source for this data set. Only data sources defined as OLAP connections will display in the list.
Enter the MDX query by direct entry or by copying and pasting from a third-party MDX editor.
Click OK.
To link the data from this query to the data from other queries or modify the output structure, see Creating Structured XML Data Sets.
BI Publisher supports queries against Lightweight Directory Access protocol (LDAP) data sources. You can query user information stored in LDAP directories and then use the data model editor to link the user information with data retrieved from other data sources.
For example, suppose you want to generate a report that lists employee salary information that is stored in your database application and include on the report employee e-mail addresses that are stored in your LDAP directory. You can create a query against each and then link the two in the data model editor to display the information in a single report.
Click the New Data Set toolbar button and select LDAP. The Create Data Set - LDAP dialog launches.
Enter a name for this data set.
Select the Data Source for this data set. Only data sources defined as LDAP connections will display in the list.
In the Attributes entry box, enter the attributes whose values you want to fetch from the LDAP data source.
For example:
mail,cn,givenName
To filter the query, enter the appropriate syntax in the Filter entry box. The syntax is as follows:
(Operator (Filter) (Filter))
For example:
(objectclass=person)
LDAP search filters are defined in the Internet Engineering Task Force (IETF) Request for Comments document 2254, "The String Representation of LDAP Search Filters," (RFC 2254). This document is available from the IETF Web site at
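For example, compound filters use the standard RFC 2254 prefix notation (the attribute names below are illustrative):

```
(&(objectclass=person)(mail=*))
(|(cn=John*)(givenName=John*))
```

The first matches person entries that also have a mail attribute; the second matches entries whose cn or givenName begins with "John".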
To link the data from this query to the data from other queries or modify the output structure, see Structuring Data.
To use a Microsoft Excel file as a data source, place the file in a directory that your administrator has set up as a data source (see Setting Up a Connection to a File Data Source, Oracle Fusion Middleware Administrator's and Developer's Guide for Oracle Business Intelligence Publisher).
The Microsoft Excel files must be saved in the Excel 97-2003 Workbook (*.xls) format.
Following are guidelines for the support of Microsoft Excel files as a data set type in BI Publisher:
The source Excel file may contain a single sheet or multiple sheets.
Each worksheet may contain one or multiple tables. A table is a block of data that is located in the continuous rows and columns of a sheet.
In each table, BI Publisher always considers the first row to be a heading row for the table.
The data type of the data in the table may be number, text, or date/time.
If multiple tables exist in a single worksheet, the tables must be identified with a name for BI Publisher to recognize each one. See Guidelines for Accessing Multiple Tables per Sheet.
If all tables in the Excel file are not named, only the data in the first table (the table located in the upper most left corner) will be recognized and fetched.
If your Excel worksheet contains multiple tables that you wish to include as data sources, you must define a name for the table in Excel.
Important: The name that you define must begin with the prefix: "BIP_", for example, "BIP_SALARIES".
To define a name for the table in Excel:
Insert the table in Excel.
Define a name for the table as follows:
Using Excel 2003: Select the table. On the Insert menu, click Name and then Define.
Using Excel 2007: Select the table. On the Formulas tab, in the Defined Names group, click Define Name, then enter the name in the Name field. The name you enter will then appear on the Formula bar.
Tip: You can learn more about defined names and their usage in the Microsoft Excel 2007 document: "Define and use names in formulas."
The following figure shows how to use the Define Name command in Microsoft Excel 2007 to name a table "BIP_Salaries".
Note that if you want to include parameters for your data set, you must define the parameters first, so that they are available for selection when defining the data set. See Adding Parameters and Lists of Values.
Important: The Excel data set type supports one value per parameter. It does not support multiple selection for parameters.
Click the New Data Set toolbar button and select Microsoft Excel File. The Create Data Set - Excel dialog launches.
Enter a name for this data set.
Select the Data Source where the Excel File resides.
Click the browse icon to connect to browse for and select the Microsoft Excel file.
If the Excel file contains multiple sheets or tables, select the appropriate Sheet Name and Table Name for this data set.
If you added parameters for this data set, click Add Parameter. Enter the Name and select the Value. The Value list is populated by the parameter Name defined in the Parameters section. See Adding Parameters and Lists of Values.
Click OK.
To link the data from this query to the data from other queries or modify the output structure, see Structuring Data.
If you have enabled integration with Oracle Business Intelligence, then you can access the Oracle Business Intelligence Presentation catalog to select an Oracle BI analysis as a data source. An analysis is a query against an organization's data that provides answers to business questions. A query contains the underlying SQL statements that are issued to the Oracle BI Server.
For more information on creating analyses, see the Oracle Fusion Middleware User's Guide for Oracle Business Intelligence Enterprise Edition.
Click the New Data Set toolbar button and select Oracle BI Analysis. The Create Data Set - Oracle BI Analysis dialog launches.
Enter a name for this data set.
Click the browse icon to connect to the Oracle BI Presentation catalog.
When the catalog connection dialog launches, navigate through the folders to select the Oracle BI analysis you wish to use as the data set for your report.
Enter a Time Out value in seconds. If BI Publisher has not received the analysis data after the time specified in the time out value has elapsed, BI Publisher will stop attempting to retrieve the analysis data.
Click OK.
Parameters and list of values will be inherited from the BI analysis and they will show up at runtime.
The BI Analysis must have default values defined for filter variables. If the analysis contains presentation variables with no default values, it is not supported as a data source by BI Publisher.
If you wish to structure data based on Oracle BI analysis data sets, note that group breaks, data links, and group-level functions are not supported.
The following are supported:
Global level functions
Setting the value for elements if null
Group Filters
BI Publisher enables you to connect to your custom applications built with Oracle Application Development Framework and use view objects in your applications as data sources for reports.
This procedure assumes that you have created a view object in your application.
Click the New Data Set toolbar button and select View Object. The Create Data Set - View Object dialog launches.
Enter a name for this data set.
Select the Data Source from the list. The data sources that you defined in the providers.xml file will display.
Enter the fully qualified name of the application module (for example: example.apps.pa.entity.applicationModule.AppModuleAM).
Click Load View Objects.
BI Publisher calls the application module to load the view object list.
Select the View Object.
Any bind variables defined will be retrieved. Create a parameter to map to each bind variable. See Adding Parameters and Lists of Values.
Click OK to save your data set.
If you wish to structure data based on view object data sets, note that group breaks, data links, and group-level functions are not supported.
The following are supported:
Global level functions
Setting the value for elements if null
Group Filters
BI Publisher supports Web service data sources that return valid XML data.
Important: Additional configuration may be required to access external Web services depending on your system's security. If the WSDL URL is outside your company firewall, see Configuring Proxy Settings, Oracle Fusion Middleware Administrator's and Developer's Guide for Oracle Business Intelligence Publisher.
If the Web service is protected by Secure Sockets Layer (SSL) see Configuring BI Publisher for Secure Socket Layer Communication, Oracle Fusion Middleware Administrator's and Developer's Guide for Oracle Business Intelligence Publisher.
BI Publisher supports Web services that return both simple data types and complex data types. You must make the distinction between simple and complex when you define the Web service data model. See Adding a Simple Web Service and Adding a Complex Web Service for descriptions of setting up each type.
Note that if you want to include parameters for the Web service method, you must define the parameters first, so that they are available for selection when setting up the data source. See Adding Parameters and Lists of Values.
Multiple parameters are supported. Ensure the method name is correct and the order of the parameters matches the order in the method. If you want to call a method in your Web service that accepts two parameters, you must map two parameters defined in the report to those two. Note that only parameters of simple type are supported, for example, string and integer.
Enter the WSDL URL and the Web Service Method.
Important: Only document/literal Web services are supported.
To specify a parameter, select the Add link. Select the parameter from the list.
Note: The parameters must already be set up in the Parameters section of the report definition. See Adding Parameters and Lists of Values.
This example shows how to add a Web service to BI Publisher as a data source. The Web service returns stock quote information. The Web service will pass one parameter: the quote symbol for a stock.
The WSDL URL is:
If you are not already familiar with the available methods and parameters in the Web service that you want to call, you can open the URL in a browser to view them. This Web service includes a method called GetQuote. It takes one parameter, which is the stock quote symbol.
To add the Web service as a data source:
Click the New Data Set toolbar button and select Web Services. The Create Data Set - Web Service dialog launches.
Enter a name for this data set.
Enter the Data Set information:
Select False for Complex Type.
Enter the WSDL URL:
Enter the Method: GetQuote
If desired, enter a Time Out period in seconds. If the BI Publisher server cannot establish a connection to the Web service, the connection attempt will time out after the specified time out period has elapsed.
Define the parameter to make it available to the Web service data set.
Select Parameters on the Data Model pane and click the Create New Parameter button. Enter the following:
Identifier - enter an internal identifier for the parameter (for example, Quote).
Data Type - select String.
Default Value - if desired, enter a default for the parameter (for example, ORCL).
Parameter Type - select Text
In the Text Setting region, enter the following:
Display label - enter the label you want displayed for your parameter (for example: Stock Symbol).
Text Field Size - enter the size for the text entry field in characters.
Select the options you wish to apply:
Text field contains comma-separated values - select this option to enable the user to enter multiple comma-separated values for this parameter.
Refresh other parameters on change - performs a partial page refresh to refresh any other parameters whose values are dependent on the value of this one.
Return to your Web service data set and add the parameter.
Click the data set name Stock Quote. Click Add Parameter. The Quote parameter you specified is now available from the list.
Click the Edit Selected Data Set button.
In the Edit Data Set dialog, click Add Parameter. The Quote parameter will display.
Click OK to close the data set.
Click Save.
To view the results XML, select Get XML Output.
Enter a valid value for your Stock Symbol parameter, select the number of rows to return, and click the Run button.
A complex Web service type internally uses soapRequest / soapEnvelope to pass the parameter values to the destination host.
To use a complex Web service as a data source, select Complex Type equal True, then enter the WSDL URL. After loading and analyzing the WSDL URL, the Data Model Editor will display the available Web services and operations. For each selected operation, the Data Model Editor will display the structure of the required input parameters. By choosing Show Optional Parameters, you can see all optional parameters as well.
If you are not already familiar with the available methods and parameters in the Web service that you want to call, open the WSDL URL in a browser to view them.
To add a complex Web service as a data source:
Enter the Data Set information:
Enter a Name for the Data Set and select Web Service as the Type.
Select True for Complex Type.
Select a security header:
Disabled - does not insert a security header.
2002 - enables the "WS-Security" Username Token with the 2002 namespace:
2004 - enables the "WS-Security" Username Token with the 2004 namespace:
Username and Password - enter the username and password for the Web service, if required.
If desired, enter a Time Out period in seconds. If the BI Publisher server cannot establish a connection to the Web service, the connection attempt will time out after the specified time out period has elapsed.
Enter a WSDL URL. When you enter the WSDL, the Web Service list will populate with the available Web services from the WSDL.
Choose a Web Service from the list. When you choose a Web service from the list, the Method list will populate with the available methods.
Select the Method. When you select the method, the Parameters will display. If you wish to see optional parameters as well, select Show Optional Parameters.
Response Data XPath - if the start of the XML data for your report is deeply embedded in the response XML generated by the Web service request, use this field to specify the path to the data that you wish to use in your BI Publisher report.
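For example, if the service wraps the useful payload several levels deep in its response, a Response Data XPath along these lines could be used (the element names here are hypothetical and depend entirely on the service's response schema):

```
/Envelope/Body/GetQuoteResponse/GetQuoteResult
```

Consult the service's WSDL or a sample response to determine the actual path for your Web service.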
Define the parameter to make it available to the Web service data set.
Select Parameters on the Report definition pane and click New to create a new parameter. Enter the following:
Name - enter an internal identifier for the parameter.
Data Type - select the appropriate data type for the parameter.
Default Value - if desired, enter a default value for the parameter.
Parameter Type - select the appropriate parameter type.
Display label - enter the label you want displayed for your parameter.
Text Field Size - enter the size for the text entry field in characters.
Return to your Web service data set and add the parameter.
Select the Web service data set and then click Edit Selected Data Set to launch the Edit Data Set dialog.
In the entry field for the Parameter, enter the following syntax: ${Parameter_Name} where Parameter_Name is the value you entered for Name when you defined the parameter to BI Publisher.
To test the Web service, see Testing Data Models and Generating Sample Data.
There is no metadata available from Web service data sets.
When you set up data sources you can define a file directory as a data source (see Setting Up a Connection to a File Data Source, Oracle Fusion Middleware Administrator's and Developer's Guide for Oracle Business Intelligence Publisher). You can then place XML documents in the file directory to access directly as data sources for your reports.
Click the Create new toolbar button and select XML. The Create Data Set - File dialog launches.
Enter a name for this data set.
Select the Data Source where the XML file resides. The list is populated from the configured File Data Source connections.
Click Browse to connect to the data source and browse the available directories. Select the file to use for this report.
Click OK.
There is no metadata available from file data sets.
Using the HTTP data source type you can create reports from RSS feeds over the Web.
Important: Additional configuration may be required to access external data source feeds depending on your system's security. If the RSS feed is protected by Secure Sockets Layer (SSL) see Configuring BI Publisher for Secure Sockets Layer Communication, Oracle Fusion Middleware Administrator's and Developer's Guide for Oracle Business Intelligence Publisher.
Note that if you want to include parameters for an HTTP (XML feed), you must define the parameters first, so that they are available for selection when defining the data set. See Adding Parameters and Lists of Values.
Click the New Data Set toolbar button and select HTTP. The Create Data Set - HTTP dialog launches.
Enter a name for this data set.
Enter the URL for the XML feed.
Select the Method: Get or Post.
Enter the Username, Password, and Realm for the URL, if required.
To add a parameter, click Add Parameter. Enter the Name and select the Value. The Value list is populated by the parameter Name defined in the Parameters section. See Adding Parameters and Lists of Values.
Click OK to close the data set dialog.
There is no metadata available from HTTP data sets.
The Data Model Editor enables you to test your data model and view the output to ensure your results are as expected. After running a successful test, you can choose to save the test output as sample data for your data model, or export the file to an external location. If your data model fails to run, you can view the data engine log.
To test your data model:
Click the Get XML Output toolbar button. This will launch the XML Output page.
Select the number of rows to return. If you included parameters, enter the desired values for the test.
Click Run to display the XML returned by your data model.
To save your test data set as sample data for your data model:
After your data model has successfully run, click the Options toolbar button and then click Save as Sample Data. This sample data will be saved to your data model.
To export the test data:
After your data model has successfully run, select the Options toolbar button and then select Export XML. You will be prompted to save the file.
To view the data engine log:
Select the Options toolbar button and then select Get Data Engine Log. You will be prompted to open or save the file. The data engine log file is an XML file.
BI Publisher stores information about the current user that can be accessed by your report data model. The user information is stored in system variables as follows:
To add the user information to your data model, you can define the variables as parameters and then define the parameter value as an element in your data model. Or, you can simply add the variables as parameters then reference the parameter values in your report.
The following example limits the data returned by the user ID:
select EMPLOYEES.LAST_NAME as LAST_NAME,
       EMPLOYEES.PHONE_NUMBER as PHONE_NUMBER,
       EMPLOYEES.HIRE_DATE as HIRE_DATE,
       :xdo_user_name as USERID
from HR.EMPLOYEES EMPLOYEES
where lower(EMPLOYEES.LAST_NAME) = :xdo_user_name
Notice the use of the lower() function: :xdo_user_name is always returned in lowercase format. BI Publisher does not expose a user ID, so you must work with the user name, either using it directly in the query or joining against a lookup table to find a user ID.
Copyright © 2004, 2010, Oracle and/or its affiliates. All rights reserved.
Perl Programming/Objects

Objects
When Perl was initially developed, there was no support at all for object-oriented (OO) programming. Since Perl 5, OO has been added using the concept of Perl packages (namespaces), an operator called bless, some special variables (@ISA, AUTOLOAD, UNIVERSAL), the -> operator, and some strong conventions supporting inheritance and encapsulation.
A class is created using the package keyword. All subroutines declared in that package become object or class methods.
A class instance is created by calling a constructor method that must be provided by the class; by convention this method is called new().
Let's see this constructor.
package Object;

sub new {
    return bless {}, shift;
}

sub setA {
    my $self = shift;
    my $a    = shift;
    $self->{a} = $a;
}

sub getA {
    my $self = shift;
    return $self->{a};
}
Client code can use this class something like this.
my $o = Object->new;
$o->setA(10);
print $o->getA;
This code prints 10.
Let's look at the new constructor in a little more detail:
The first thing to note is that when a subroutine is called using the -> notation, a new argument is prepended to the argument list: either the name of the package or a reference to the object (Object->new() passes the string "Object"; $o->setA(10) passes $o). Until that makes sense, you will find OO in Perl very confusing.
To use private variables in objects and get variable-name checking, you can use a slightly different approach to creating objects.
package my_class;
use strict;
use warnings;

{    # All code is enclosed in block context
    my %bar;    # All attributes are declared as lexical hashes

    sub new {
        my $class = shift;
        my $this  = \do { my $scalar };    # object is a reference to a scalar (inside-out object)
        bless $this, $class;
        return $this;
    }

    sub set_bar {
        my $this = shift;
        $bar{$this} = shift;
    }

    sub get_bar {
        my $this = shift;
        return $bar{$this};
    }
}
Now you have good encapsulation: you cannot access object variables directly via $o->{bar}, only through the set/get methods. It is also impossible to make mistakes in object variable names, because they are not hash keys but normal Perl variables that must be declared.
We use this class the same way as hash-blessed objects:
my $o = my_class->new();
$o->set_bar(10);
print $o->get_bar();
This prints 10.
On Wed, Jan 17, 2007 at 06:49:08AM +0000, Tim Bradshaw wrote:
> sudo is obviously a better answer than SUID programs, but I imagine
> that whatever braindamage is breaking SUID programs should also break
> anything running with UID 0, since it's the UID being 0 that is the
> issue.
No, only execing a setuid file clears the personality bit. The code
that actually does this is here:
-bcd
On 15 Jan 2007, at 21:33, Brian Mastenbrook wrote:
>
> This may be a dumb question, but why use a suid root SBCL instead
> of sudo
> with a restricted configuration file? sudo allows much more fine-
> grained
> control of execute permissions than the suid bit does.
sudo is obviously a better answer than SUID programs, but I imagine
that whatever braindamage is breaking SUID programs should also break
anything running with UID 0, since it's the UID being 0 that is the
issue.
--tim
Jon Buller <jon@...> writes:
> Richard M Kreuter wrote:
>> The following message is a courtesy copy of an article
>> that has been posted to gmane.lisp.steel-bank.devel as well.
>>
>> Jon Buller <jon@...> writes:
>
> [ problem with READDIR/DIRENT-NAME test failing in sb-posix deleted... ]
>
>> This is the same sort of thing that made the stat wrappers not work...
>
> I saw those messages you wrote about stat, and didn't think about that
> when looking at this. Thanks for the extra eyes and brains.
It looks like they've done the same for socket(), too, though in this
case, the only difference is in the value of errno after certain bogus
socket() calls. Eventually, maybe every system call on NetBSD will
need this treatment!
This is a pretty minor nit, but in case my previous patch hasn't been
merged yet, please use the following instead.
Thanks,
RmK
Index: src/runtime/bsd-os.c
===================================================================
RCS file: /cvsroot/sbcl/sbcl/src/runtime/bsd-os.c,v
retrieving revision 1.45
diff -u -r1.45 bsd-os.c
--- src/runtime/bsd-os.c 3 Jan 2007 20:42:33 -0000 1.45
+++ src/runtime/bsd-os.c 17 Jan 2007 02:36:12 -0000
@@ -52,7 +52,8 @@
#include <sys/sysctl.h>
#include <string.h>
#include <sys/stat.h> /* For the stat-family wrappers. */
-
+#include <dirent.h> /* For the opendir()/readdir() wrappers */
+#include <sys/socket.h> /* For the socket() wrapper */
static void netbsd_init();
#endif /* __NetBSD__ */
@@ -327,24 +328,44 @@
}
}
-/* The stat() routines in NetBSD's C library are compatibility
- wrappers for some very old version of the stat buffer structure.
- Programs must be processed by the C toolchain in order to get an
- up-to-date definition of the stat() routine. These wrappers are
- used only in sb-posix, as of 2006-10-15. -- RMK */
-int _stat(const char *path, struct stat *sb) {
- return (stat(path, sb));
+/* Various routines in NetBSD's C library are compatibility wrappers
+ for old versions. Programs must be processed by the C toolchain in
+ order to get up-to-date definitions of such routines. */
+/* The stat-family, opendir, and readdir are used only in sb-posix, as
+ of 2007-01-16. -- RMK */
+int
+_stat(const char *path, struct stat *sb)
+{
+ return stat(path, sb);
}
-
-int _lstat(const char *path, struct stat *sb) {
- return (lstat(path, sb));
+int
+_lstat(const char *path, struct stat *sb)
+{
+ return lstat(path, sb);
}
-
-int _fstat(int fd, struct stat *sb) {
- return (fstat(fd, sb));
+int
+_fstat(int fd, struct stat *sb)
+{
+ return fstat(fd, sb);
}
+DIR *
+_opendir(const char *filename)
+{
+ return opendir(filename);
+}
+struct dirent *
+_readdir(DIR *dirp)
+{
+ return readdir(dirp);
+}
+/* Used in sb-bsd-sockets. */
+int
+_socket(int domain, int type, int protocol)
+{
+ return socket(domain, type, protocol);
+}
#endif /* __NetBSD__ */
#ifdef __FreeBSD__
Index: contrib/sb-posix/interface.lisp
===================================================================
RCS file: /cvsroot/sbcl/sbcl/contrib/sb-posix/interface.lisp,v
retrieving revision 1.35
diff -u -r1.35 interface.lisp
--- contrib/sb-posix/interface.lisp 15 Jan 2007 22:09:11 -0000 1.35
+++ contrib/sb-posix/interface.lisp 17 Jan 2007 02:36:12 -0000
@@ ).
Index: contrib/sb-bsd-sockets/constants.lisp
===================================================================
RCS file: /cvsroot/sbcl/sbcl/contrib/sb-bsd-sockets/constants.lisp,v
retrieving revision 1.13
diff -u -r1.13 constants.lisp
--- contrib/sb-bsd-sockets/constants.lisp 14 Apr 2006 07:23:06 -0000 1.13
+++ contrib/sb-bsd-sockets/constants.lisp 17 Jan 2007 02:36:12 -0000
@@ -152,7 +152,7 @@
((* t) control "void *" "msg_control")
(integer controllen "socklen_t" "msg_controllen")
(integer flags "int" "msg_flags")))
- (:function socket ("socket" int
+ (:function socket (#-netbsd "socket" #+netbsd "_socket" int
(domain int)
(type int)
(protocol int)))
This list is aimed at an intermediate-level candidate; if you are more experienced, the questions listed here may not be suitable for your interview. Based on feedback from readers, we are adding more questions to the list and updating the questions with the most appropriate answers. We are looking for feedback on the questions in this article: if you have any questions, please post them in the comments section and we will answer them promptly. The following are some other questions which may be useful for you:
Java Interview Questions
1) What is the difference between an Abstract class and Interface?
- Abstract classes may have some executable methods and methods left unimplemented. Interfaces contain no implementation code.
- A class can implement any number of interfaces, but subclass at most one abstract class.
- An abstract class can have non abstract methods. All methods of an interface are abstract.
- An abstract class can have instance variables. An interface cannot.
- An abstract class can define constructor. An interface cannot.
- An abstract class can have any visibility: public, protected, private or none (package). An interface’s visibility must be public or none (package).
- An abstract class inherits from Object and includes methods such as clone() and equals().
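The points above can be sketched in a few lines of Java (the class and interface names here are illustrative, not from any real API):

```java
// An interface: no instance state, no constructor, all methods abstract.
interface Drawable {
    void draw();    // implicitly public and abstract
}

// An abstract class: may mix concrete methods, fields, and constructors.
abstract class Shape implements Drawable {
    protected String name;                           // instance variable -- allowed
    Shape(String name) { this.name = name; }         // constructor -- allowed
    String describe() { return "shape: " + name; }   // concrete method
    // draw() is left abstract for subclasses to implement
}

class Circle extends Shape {
    Circle() { super("circle"); }
    public void draw() { System.out.println("drawing " + name); }
}

public class AbstractVsInterface {
    public static void main(String[] args) {
        Shape s = new Circle();    // subclasses at most one abstract class...
        s.draw();                  // ...but could implement many interfaces
        System.out.println(s.describe());
    }
}
```

Since Java 8, interfaces may also carry default and static method bodies, which narrows (but does not erase) this distinction.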
2) What are checked and unchecked exceptions?
Checked exceptions must be either caught or declared in a method's throws clause, and this is verified at compile time. Unchecked exceptions (RuntimeException, Error, and their subclasses) do not have to be declared or caught.
3) What is a user defined exception?
User-defined exceptions may be implemented by:
- defining a class to respond to the exception (typically a subclass of Exception), and
- throwing an instance of that class where the exceptional condition arises.
4) What is the difference between C++ and Java?
- Java does not support multiple inheritance of classes
- There are no destructors in Java
5) What are statements in Java?
Statements are equivalent to sentences in natural languages. A statement forms a complete unit of execution. The following types of expressions can be made into a statement by terminating the expression with a semicolon:
- Assignment expressions
- Any use of ++ or --
- Method calls
- Object creation expressions
7)What is JNI?
JNI is an acronym of Java Native Interface. Using JNI we can call functions which are written in other languages from Java. Following are its advantages and disadvantages.
Advantages:
- You want to use an existing library that was previously written in another language.
- You want to call a Windows API function.
- For the sake of execution speed.
- You want to call an API function of some server product that is written in C or C++ from a Java client.
Disadvantages:
- You lose "write once, run anywhere."
- It is difficult to debug runtime errors in native code.
- Potential security risk.
- You cannot call it from an applet.
14) Why java does not have multiple inheritance?
The Java design team strove to make Java:
- Simple, object oriented, and familiar
- Robust and secure
- Architecture neutral and portable
- High performance
- Interpreted, threaded, and dynamic
Multiple inheritance of classes was left out largely in the interest of simplicity: it introduces ambiguities (such as the diamond problem), and interfaces provide most of its benefits without the complexity.
19) Is Iterator a Class or Interface? What is its use?
Iterator is an interface which is used to step through the elements of a Collection.
20) What are the similarities/differences between an Abstract class and an Interface? (See question 1.)
21) What is a transient variable?
A transient variable is a variable that may not be serialized.
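A short round-trip makes the effect visible: after deserialization, the transient field comes back as its default value (null). The Session class here is a made-up example.

```java
import java.io.*;

// Illustrative class: `secret` is transient, so serialization skips it.
class Session implements Serializable {
    String user = "alice";
    transient String secret = "t0ps3cret";
}

public class TransientDemo {
    // Serialize to a byte array, then deserialize a fresh copy.
    static Session roundTrip(Session s) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(s);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return (Session) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Session copy = roundTrip(new Session());
        System.out.println(copy.user);    // survives serialization: alice
        System.out.println(copy.secret);  // transient: comes back null
    }
}
```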
22) Which containers use a border Layout as their default layout?
The window, Frame and Dialog classes use a border layout as their default layout.
23) Why do threads block on I/O?
Threads block on I/O (that is, enter the waiting state) so that other threads may execute while the I/O operation is performed.
24) What is the purpose of the finally clause of a try-catch-finally statement?
The finally clause is used to provide the capability to execute code no matter whether or not an exception is thrown or caught.
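A minimal sketch (with invented names) showing that the finally block runs whether or not an exception is thrown:

```java
public class FinallyDemo {
    static StringBuilder log = new StringBuilder();

    static void attempt(boolean fail) {
        try {
            log.append("try;");
            if (fail) throw new RuntimeException("boom");
        } catch (RuntimeException e) {
            log.append("catch;");
        } finally {
            log.append("finally;");    // runs on both paths
        }
    }

    public static void main(String[] args) {
        attempt(false);
        attempt(true);
        System.out.println(log);    // try;finally;try;catch;finally;
    }
}
```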
81) What is the argument type of a program’s main() method?
A program’s main() method takes an argument of the String[] type.
82) Which Java operator is right associative?
The = operator is right associative.
83) What is the Locale class?
The Locale class is used to tailor program output to the conventions of a particular geographic, political, or cultural region.
84) Can a double value be cast to a byte?
Yes, a double value can be cast to a byte.
121) What is the purpose of the System class?
The purpose of the System class is to provide access to system resources.
122) Which TextComponent method is used to set a TextComponent to the read-only state?
setEditable()
123) How are the elements of a CardLayout organized?
The elements of a CardLayout are stacked, one on top of the other, like a deck of cards.
124) Is &&= a valid Java operator?
No, it is not.
125) Name the eight primitive Java types?
The eight primitive types are byte, char, short, int, long, float, double, and boolean.
126) Which class should you use to obtain design information about an object?
The Class class is used to obtain information about an object’s design.
127) What is the relationship between clipping and repainting?
When a window is repainted by the AWT painting thread, it sets the clipping regions to the area of the window that requires repainting.
128) Is “abc” a primitive value?
The String literal “abc” is not a primitive value. It is a String object.
129) What is the relationship between an event-listener interface and an event-adapter class?
An event-listener interface defines the methods that must be implemented by an event handler for a particular kind of event. An event adapter provides a default implementation of an event-listener interface.
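A sketch using the AWT mouse events: implementing MouseListener directly forces bodies for all five methods, while extending MouseAdapter lets you override only the one you care about. The class names FullListener and ClickOnly are invented for illustration.

```java
import java.awt.Container;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import java.awt.event.MouseListener;

// Implementing the listener interface directly: all five methods required.
class FullListener implements MouseListener {
    public void mouseClicked(MouseEvent e)  { System.out.println("clicked"); }
    public void mousePressed(MouseEvent e)  { }
    public void mouseReleased(MouseEvent e) { }
    public void mouseEntered(MouseEvent e)  { }
    public void mouseExited(MouseEvent e)   { }
}

// Extending the adapter: override only the event of interest.
class ClickOnly extends MouseAdapter {
    int clicks;
    @Override public void mouseClicked(MouseEvent e) { clicks++; }
}

public class AdapterDemo {
    public static void main(String[] args) {
        ClickOnly handler = new ClickOnly();
        // Deliver a synthetic event directly, as a component would.
        handler.mouseClicked(new MouseEvent(new Container(),
                MouseEvent.MOUSE_CLICKED, 0L, 0, 5, 5, 1, false));
        System.out.println("clicks=" + handler.clicks);
    }
}
```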
130) What restrictions are placed on the values of each case of a switch statement?
During compilation, the values of each case of a switch statement must evaluate to a value that can be promoted to an int value.
131) What modifiers may be used with an interface declaration?
An interface may be declared as public or abstract.
132) Is a class a subclass of itself?
A class is a subclass of itself.
133) What is the highest-level event class of the event-delegation model?
The java.util.EventObject class is the highest-level class in the event-delegation class hierarchy.
134) What event results from the clicking of a button?
The ActionEvent event is generated as the result of the clicking of a button.
135) How can a GUI component handle its own events?
A component can handle its own events by implementing the required event-listener interface and adding itself as its own event listener.
139) What is the Collection interface?
The Collection interface provides support for the implementation of a mathematical bag – an unordered collection of objects that may contain duplicates.
140) What modifiers can be used with a local inner class?
A local inner class may be final or abstract.
141) What is the difference between static and non-static variables?
A static variable is associated with the class as a whole rather than with specific instances of a class. Non-static variables take on unique values with each object instance.
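The distinction can be demonstrated with a small counter class (illustrative names):

```java
class Counter {
    static int created;    // one copy, shared by the whole class
    int id;                // one copy per instance
    Counter() { id = ++created; }
}

public class StaticDemo {
    public static void main(String[] args) {
        Counter a = new Counter();
        Counter b = new Counter();
        // Each instance keeps its own id; the static count is shared.
        System.out.println(a.id + " " + b.id + " " + Counter.created);
    }
}
```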
142) What is the difference between the paint() and repaint() methods?
The paint() method supports painting via a Graphics object. The repaint() method is used to cause paint() to be invoked by the AWT painting thread.
143) What is the purpose of the File class?
The File class is used to create objects that provide access to the files and directories of a local file system.
144) Can an exception be rethrown?
Yes, an exception can be rethrown.
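A common pattern is to catch an exception, log or partially handle it, and rethrow the same object so callers can still react. A sketch with invented method names:

```java
public class RethrowDemo {
    static void inner() {
        throw new IllegalStateException("original failure");
    }

    static void middle() {
        try {
            inner();
        } catch (IllegalStateException e) {
            // Log, then rethrow the very same exception object.
            System.out.println("middle saw: " + e.getMessage());
            throw e;
        }
    }

    public static void main(String[] args) {
        try {
            middle();
        } catch (IllegalStateException e) {
            System.out.println("main caught: " + e.getMessage());
        }
    }
}
```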
145) Which Math method is used to calculate the absolute value of a number?
The abs() method is used to calculate absolute values.
146) How does multithreading take place on a computer with a single CPU?
The operating system's task scheduler allocates execution time to multiple tasks. By quickly switching between executing tasks, it creates the impression that tasks execute simultaneously.
147) When does the compiler supply a default constructor for a class?
The compiler supplies a default constructor for a class if no other constructors are provided.
148) When is the finally clause of a try-catch-finally statement executed?
The finally clause of the try-catch-finally statement is always executed unless the thread of execution terminates or an exception occurs within the execution of the finally clause.
149) Which class is the immediate superclass of the Container class?
Component
150) If a method is declared as protected, where may the method be accessed?
A protected method may only be accessed by classes or interfaces of the same package or by subclasses of the class in which it is declared.
151) How can the Checkbox class be used to create a radio button?
By associating Checkbox objects with a CheckboxGroup.
152) Which non-Unicode letter characters may be used as the first character of an identifier?
The non-Unicode letter characters $ and _ may appear as the first character of an identifier
153) What restrictions are placed on method overloading?
Two methods may not have the same name and argument list but different return types.
154) What happens when you invoke a thread’s interrupt method while it is sleeping or waiting?
When a task’s interrupt() method is executed, the task enters the ready state. The next time the task enters the running state, an InterruptedException is thrown.
156) What is the return type of a program’s main() method?
A program’s main() method has a void return type.
157) Name four Container classes.
Window, Frame, Dialog, FileDialog, Panel, Applet, or ScrollPane
159) What class of exceptions are generated by the Java run-time system?
The Java runtime system generates RuntimeException and Error exceptions.
160) What class allows you to read objects directly from a stream?
The ObjectInputStream class supports the reading of objects from input streams.
161) What is the difference between a field variable and a local variable?
A field variable is a variable that is declared as a member of a class. A local variable is a variable that is declared local to a method.
162) Under what conditions is an object’s finalize() method invoked by the garbage collector?
The garbage collector invokes an object’s finalize() method when it detects that the object has become unreachable.
163) How are this() and super() used with constructors?
this() is used to invoke a constructor of the same class. super() is used to invoke a superclass constructor.
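Constructor chaining with this() and super() can be traced in a small example (class names invented; the call must be the first statement in the constructor):

```java
class Base {
    String trace;
    Base(String msg) { trace = "Base(" + msg + ")"; }
}

class Derived extends Base {
    Derived() {
        this("default");    // chains to the other Derived constructor
    }
    Derived(String msg) {
        super(msg);         // must be first: invokes Base(String)
        trace += " -> Derived(" + msg + ")";
    }
}

public class CtorChain {
    public static void main(String[] args) {
        // no-arg ctor -> this("default") -> super("default")
        System.out.println(new Derived().trace);
    }
}
```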
164) What is the relationship between a method’s throws clause and the exceptions that can be thrown during the method’s execution?
A method’s throws clause must declare any checked exceptions that are not caught within the body of the method.
167) Why are the methods of the Math class static?
So they can be invoked as if they are a mathematical code library.
168) What Checkbox method allows you to tell if a Checkbox is checked?
getState()
169) What state is a thread in when it is executing?
An executing thread is in the running state.
170) What are the legal operands of the instanceof operator?
The left operand is an object reference or null value and the right operand is a class, interface, or array type.
171) How are the elements of a GridLayout organized?
The elements of a GridLayout are of equal size and are laid out using the squares of a grid.
172) What is an I/O filter?
An I/O filter is an object that reads from one stream and writes to another, usually altering the data in some way as it is passed from one stream to another.
173) If an object is garbage collected, can it become reachable again?
Once an object is garbage collected, it ceases to exist. It can no longer become reachable again.
174) What is the Set interface?
The Set interface provides methods for accessing the elements of a finite mathematical set. Sets do not allow duplicate elements.
175) What classes of exceptions may be thrown by a throw statement?
A throw statement may throw any expression that may be assigned to the Throwable type.
176) What are E and PI?
E is the base of the natural logarithm and PI is mathematical value pi.
177) Are true and false keywords?
The values true and false are not keywords.
178) What is a void return type?
A void return type indicates that a method does not return a value.
180) What is the difference between the File and RandomAccessFile classes?
The File class encapsulates the files and directories of the local file system. The RandomAccessFile class provides the methods needed to directly access data contained in any part of a file.
181) What happens when you add a double value to a String?
The result is a String object.
182) What is your platform’s default character encoding?
If you are running Java on English Windows platforms, it is probably Cp1252. If you are running Java on English Solaris platforms, it is most likely 8859_1.
183) Which package is always imported by default?
The java.lang package is always imported by default.
184) What interface must an object implement before it can be written to a stream as an object?
An object must implement the Serializable or Externalizable interface before it can be written to a stream as an object.
185) How are this and super used?
this is used to refer to the current object instance. super is used to refer to the variables and methods of the superclass of the current object instance.
186) What is the purpose of garbage collection?
The purpose of garbage collection is to identify and discard objects that are no longer needed by a program so that their resources may be reclaimed and reused.
187) What is a compilation unit?
A compilation unit is a Java source code file.
188) What interface is extended by AWT event listeners?
All AWT event listeners extend the java.util.EventListener interface.
189) What restrictions are placed on method overriding?
- Overridden methods must have the same name, argument list, and return type.
- The overriding method may not limit the access of the method it overrides.
- The overriding method may not throw any exceptions that may not be thrown by the overridden method.
190) How can a dead thread be restarted?
A dead thread cannot be restarted.
191) What happens if an exception is not caught?
An uncaught exception results in the uncaughtException() method of the thread’s ThreadGroup being invoked, which eventually results in the termination of the program in which it is thrown.
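In modern Java, the per-thread hook Thread.setUncaughtExceptionHandler() sits in front of the ThreadGroup mechanism described above; a handler installed on the thread intercepts the exception before the default stack-trace printing. A sketch:

```java
public class UncaughtDemo {
    static volatile String report;

    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            throw new RuntimeException("unhandled!");
        });
        t.setName("worker");
        // Without this handler, the exception would reach
        // ThreadGroup.uncaughtException and print a stack trace.
        t.setUncaughtExceptionHandler((thread, ex) ->
                report = thread.getName() + ": " + ex.getMessage());
        t.start();
        t.join();
        System.out.println(report);    // worker: unhandled!
    }
}
```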
192) What is a layout manager?
A layout manager is an object that is used to organize components in a container.
193) Which arithmetic operations can result in the throwing of an ArithmeticException?
Integer / and % can result in the throwing of an ArithmeticException.
195) Can an abstract class be final?
An abstract class may not be declared as final.
196) What is the ResourceBundle class?
The ResourceBundle class is used to store locale-specific resources that can be loaded by a program to tailor the program’s appearance to the particular locale in which it is being run.
199) What is the difference between a Scrollbar and a ScrollPane?
A Scrollbar is a Component, but not a Container. A ScrollPane is a Container. A ScrollPane handles its own events and performs its own scrolling.
200) What is the difference between a public and a non-public class?
A public class may be accessed outside of its package. A non-public class may not be accessed outside of its package.
201) To what value is a variable of the boolean type automatically initialized?
The default value of the boolean type is false.
202) Can try statements be nested?
Try statements may be nested.
204) What is the purpose of a statement block?
A statement block is used to organize a sequence of statements as a single statement group.
206) What modifiers may be used with a top-level class?
A top-level class may be public, abstract, or final.
207) What are the Object and Class classes used for?
The Object class is the highest-level class in the Java class hierarchy. The Class class is used to represent the classes and interfaces that are loaded by a Java program.
209) Can an unreachable object become reachable again?
An unreachable object may become reachable again. This can happen when the object’s finalize() method is invoked and the object performs an operation which causes it to become accessible to reachable objects.
210) When is an object subject to garbage collection?
An object is subject to garbage collection when it becomes unreachable to the program in which it is used.
211) What method must be implemented by all threads?
All tasks must implement the run() method, whether they are a subclass of Thread or implement the Runnable interface.
212) What methods are used to get and set the text label displayed by a Button object?
getLabel() and setLabel()
213) Which Component subclass is used for drawing and painting?
Canvas
215) What are the two basic ways in which classes that can be run as threads may be defined?
A thread class may be declared as a subclass of Thread, or it may implement the Runnable interface.
219) What is the List interface?
The List interface provides support for ordered collections of objects.
Threads have been around for some time, but few programmers have actually worked with them. There is even some debate over whether or not the average programmer can use threads effectively. In Java, working with threads can be easy and productive. In fact, threads provide the only way to effectively handle a number of tasks. So it's important that you become familiar with threads early in your exploration of Java.
Threads are integral to the way Java works. We've already seen that an applet's paint() method isn't called by the applet itself, but by another thread within the interpreter. At any given time, there may be many such background threads, performing activities in parallel with your application. In fact, it's easy to get a half dozen or more threads running in an applet without even trying, simply by requesting images, updating the screen, playing audio, and so on. But these things happen behind the scenes; you don't normally have to worry about them. In this chapter, we'll talk about writing applications that create and use their own threads explicitly.

A thread in Java is represented by an instance of the Thread class, and the code the thread executes lives in a target object that implements the Runnable interface. Runnable defines a single, general-purpose method:
public interface Runnable {
    abstract public void run();
}
Every thread begins its life by executing a run() method in a particular object. run() is a rather mundane method that can hold an arbitrary body of code. It is public, takes no arguments, and has no return value.
A newly born Thread remains idle until we give it a figurative slap on the bottom by calling its start() method. The thread then wakes up and proceeds to execute the run() method of its target object. start() can be called only once in the lifetime of a Thread. Once a thread starts, it continues running until the target object's run() method completes, or we call the thread's stop() method to kill the thread permanently. A little later, we will look at some other methods you can use to control the thread's progress while it is running.
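The lifecycle just described (construct, start(), run() executes, thread dies when run() returns) can be compressed into a minimal sketch. Note that in modern Java, stop(), suspend(), and resume() have long been deprecated; cooperative flags and interruption are preferred.

```java
public class Lifecycle {
    static volatile boolean ran;

    public static void main(String[] args) throws InterruptedException {
        Runnable target = () -> ran = true;    // the body the thread will execute
        Thread t = new Thread(target);
        System.out.println(t.isAlive());       // false: not started yet
        t.start();                             // thread wakes up, runs run()
        t.join();                              // wait for run() to complete
        System.out.println(t.isAlive());       // false again: thread has died
        System.out.println(ran);               // true
    }
}
```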
Now let's look at an example. The following class, Animation, implements a run() method to drive its drawing loop:
class Animation implements Runnable {
    ...
    public void run() {
        while ( true ) {
            // Draw Frames
            ...
            repaint();
        }
    }
}
To use it, we create a Thread object with an instance of Animation as its target object, and invoke its start() method. We can perform these steps explicitly, as in the following:
Animation happy = new Animation("Mr. Happy");
Thread myThread = new Thread( happy );
myThread.start();
...
Here we have created an instance of our Animation class and passed it as the argument to the constructor for myThread. When we call the start() method, myThread begins to execute Animation's run() method. Let the show begin!
The above situation is not terribly object oriented. More often, we want an object to handle its own thread, as shown in Figure 6-1.
Figure 6-1 depicts a Runnable object that creates and starts its own Thread. We can have our Animation class perform these actions in its constructor:
class Animation implements Runnable {
    Thread myThread;

    Animation (String name) {
        myThread = new Thread( this );
        myThread.start();
    }
    ...
In this case, the argument we pass to the Thread constructor is this, the current object instance. We keep the Thread reference in the instance variable myThread, in case we want to stop the show, or exercise some other kind of control.
The Runnable interface lets us make an arbitrary object the target of a thread, as we did above. This is the most important, general usage of the Thread class. In most situations where you need to use threads, you'll create a class that implements the Runnable interface. I'd be remiss, however, if I didn't show you the other technique for creating a thread. Another design option is to make our target class a subclass of a type that is already runnable. The Thread class itself implements the Runnable interface; it has its own run() method we can override to make it do something useful:
class Animation extends Thread {
    ...
    public void run() {
        while ( true ) {
            // Draw Frames
            ...
            repaint();
        }
    }
}
The skeleton of our Animation class above looks much the same as before, except that our class is now a kind of Thread. To go along with this scheme, the default (empty) constructor of the Thread class makes itself the default target. That is, by default, the Thread executes its own run() method when we call the start() method, as shown in Figure 6-2. Note that our subclass must override the run() method in the Thread class because Thread simply defines an empty run() method.
Now we create an instance of Animation and call its start() method:
Animation bouncy = new Animation("Bouncy");
bouncy.start();
Alternatively, we can have the Animation object start itself when it is created, as before:
class Animation extends Thread {
    Animation (String name) {
        start();
    }
    ...
Here our Animation object just calls its own start() method when it is created.
Subclassing Thread probably seems like a convenient way to bundle a Thread and its target run() method. However, as always, you should let good object-oriented design dictate how you structure your classes. In most cases, a specific run() method is probably closely related to the functionality of a particular class in your application, so you should implement run() in that class. This technique has the added advantage of allowing run() to access any private variables and methods it might need in the class.
If you subclass Thread to implement a thread, you are saying you need a new type of object that is a kind of Thread. While there is something unnaturally satisfying about making an object primarily concerned with performing a single task (like animation), the actual situations where you'll want to create a subclass of Thread should be rather rare. If you find you're subclassing Thread left and right, you may want to examine whether you are falling into the design trap of making objects that are simply glorified functions.
We have seen the start() method used to bring a newly created Thread to life. Three other methods let us control a Thread's execution: stop(), suspend(), and resume(). None of these methods take any arguments; they all operate on the current thread object. The stop() method complements start(); it destroys the thread. start() and stop() can be called only once in the life of a Thread. By contrast, the suspend() and resume() methods can be used to arbitrarily pause and then restart the execution of a Thread.
Often, for simple tasks, it is easy enough to throw away a thread when we want to stop it and simply create a new one when we want to proceed again. suspend() and resume() can be used in situations where the Thread's setup is very expensive. For example, if creating the thread involves opening a socket and setting up some elaborate communication, it probably makes more sense to use suspend() and resume() with this thread.
Another common need is to put a thread to sleep for some period of time. Thread.sleep() is a static method of the Thread class that causes the currently executing thread to delay for a specified number of milliseconds:
try { Thread.sleep ( 1000 ); } catch ( InterruptedException e ) { }
Thread.sleep() throws an InterruptedException if it is interrupted by another Thread.[1] When a thread is asleep, or otherwise blocked on input of some kind, it doesn't consume CPU time or compete with other threads for processing. We'll talk more about thread priority and scheduling later.
[1] The Thread class contains an interrupt() method to allow one thread to interrupt another thread, but this functionality is not implemented in Java 1.0.
A Thread continues to execute until one of the following things happens:

It completes (returns from) its target run() method

Its stop() method is called
So what happens if the run() method for a thread never terminates, and the application that started the thread never calls its stop() method? The answer is that the thread lives on, even after the application that created it has finished. This means we have to be aware of how our threads eventually terminate, or an application can end up leaving orphaned threads that unnecessarily consume resources.
One solution is to mark such a background thread as a daemon thread with the setDaemon() method, so we don't have to clean it up ourselves. Here's a devilish example of using daemon threads:
class Devil extends Thread { Devil() { setDaemon( true ); start(); } public void run() { // Perform evil tasks ... } }
In the above example, the Devil thread sets its daemon status when it is created. If any Devil threads remain when our application is otherwise complete, Java kills them for us. We don't have to worry about cleaning them up.
Daemon threads are primarily useful in standalone Java applications and in the implementation of the Java system itself, but not in applets. Since an applet runs inside of another Java application, any daemon threads it creates will continue to live until the controlling application exits--probably not the desired effect.
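To see the daemon behavior concretely, here's a minimal, self-contained sketch (the class name DaemonDemo is ours, not from the text): a Devil-style thread loops forever, but because it is marked as a daemon, the VM exits as soon as main() returns instead of hanging.

```java
public class DaemonDemo {
    // Starts a Devil-style background thread marked as a daemon.
    public static boolean startDaemon() {
        Thread devil = new Thread(new Runnable() {
            public void run() {
                while (true) {                  // never returns on its own
                    try {
                        Thread.sleep(100);
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }
        });
        devil.setDaemon(true);   // must be set before start()
        devil.start();
        return devil.isDaemon() && devil.isAlive();
    }

    public static void main(String[] args) {
        System.out.println(startDaemon());
        // main now returns; since the only remaining thread is a daemon,
        // the Java VM exits rather than waiting on it forever.
    }
}
```

Remove the setDaemon(true) call and the program never terminates, which is exactly the orphaned-thread problem described above.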
Every thread has a life of its own. Normally, a thread goes about its business without any regard for what other threads in the application are doing. Threads may be time-sliced, which means they can run in arbitrary spurts and bursts as directed by the operating system. On a multiprocessor system, it is even possible for many different threads to be running simultaneously on different CPUs. This section is about coordinating the activities of two or more threads, so they can work together and not collide in their use of the same address space.
Java provides a few simple structures for synchronizing the activities of threads. They are all based on the concept of monitors, a widely used synchronization scheme developed by C.A.R. Hoare. You don't have to know the details about how monitors work to be able to use them, but it may help you to have a picture in mind.
A monitor is essentially a lock. The lock is attached to a resource that many threads may need to access, but that should be accessed by only one thread at a time. It's not unlike a public restroom at a gas station. If the resource is not being used, the thread can acquire the lock and access the resource. By the same token, if the restroom is unlocked, you can enter and lock the door. When the thread is done, it relinquishes the lock, just as you unlock the door and leave it open for the next person. However, if another thread already has the lock for the resource, all other threads have to wait until the current thread finishes and releases the lock, just as if the restroom is locked when you arrive, you have to wait until the current occupant is done and unlocks the door.
Fortunately, Java makes the process of synchronizing access to resources quite easy. The language handles setting up and acquiring locks; all you have to do is specify which resources require locks.
The most common need for synchronization among threads in Java is to serialize their access to some resource, namely an object. In other words, synchronization makes sure only one thread at a time can perform certain activities that manipulate an object. In Java, every object has a lock associated with it. To be more specific, every class and every instance of a class has its own lock. The synchronized keyword marks places where a thread must acquire the lock before proceeding.
For example, say we implemented a SpeechSynthesizer class that contains a say() method. We don't want multiple threads calling say() at the same time or we wouldn't be able to understand anything being said. So we mark the say() method as synchronized, which means that a thread has to acquire the lock on the SpeechSynthesizer object before it can speak:
class SpeechSynthesizer { synchronized void say( String words ) { // Speak } }
Because say() is an instance method, a thread has to acquire the lock on the particular SpeechSynthesizer instance it is using before it can invoke the say() method. When say() has completed, it gives up the lock, which allows the next waiting thread to acquire the lock and run the method. Note that it doesn't matter whether the thread is owned by the SpeechSynthesizer itself or some other object; every thread has to acquire the same lock, that of the SpeechSynthesizer instance. If say() were a class (static) method instead of an instance method, we could still mark it as synchronized. But in this case, since there is no instance object involved, the lock would be on the class object itself.
Often, you want to synchronize multiple methods of the same class, so that only one of the methods modifies or examines parts of the class at a time. All static synchronized methods in a class use the same class object lock. By the same token, all instance methods in a class use the same instance object lock. In this way, Java can guarantee that only one of a set of synchronized methods is running at a time. For example, a SpreadSheet class might contain a number of instance variables that represent cell values, as well as some methods that manipulate the cells in a row:
class SpreadSheet { int cellA1, cellA2, cellA3; synchronized int sumRow() { return cellA1 + cellA2 + cellA3; } synchronized void setRow( int a1, int a2, int a3 ) { cellA1 = a1; cellA2 = a2; cellA3 = a3; } ... }
In this example, both methods setRow() and sumRow() access the cell values. You can see that problems might arise if one thread were changing the values of the variables in setRow() at the same moment another thread was reading the values in sumRow(). To prevent this, we have marked both methods as synchronized. When threads are synchronized, only one will be run at a time. If a thread is in the middle of executing setRow() when another thread calls sumRow(), the second thread waits until the first one is done executing setRow() before it gets to run sumRow(). This synchronization allows us to preserve the consistency of the SpreadSheet. And the best part is that all of this locking and waiting is handled by Java; it's transparent to the programmer.
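The effect of this serialization is easy to demonstrate with a small sketch (the class name SyncDemo is ours, not from the text): two threads hammer a shared counter through a synchronized method, and because only one thread at a time holds the instance lock, no increments are lost.

```java
public class SyncDemo {
    private int count = 0;

    // Each caller must hold this SyncDemo's lock, so increments never interleave.
    synchronized void increment() { count++; }
    synchronized int get() { return count; }

    // Runs two threads that each bump the counter 10,000 times.
    public static int run() {
        final SyncDemo demo = new SyncDemo();
        Runnable task = new Runnable() {
            public void run() {
                for (int i = 0; i < 10000; i++) demo.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        try {
            t1.join(); t2.join();   // wait for both threads to finish
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return demo.get();
    }

    public static void main(String[] args) {
        // With synchronized in place this always prints 20000.
        System.out.println(run());
    }
}
```

Delete the synchronized keywords and the total will usually come up short, because the two threads' read-increment-write sequences can interleave.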
In addition to synchronizing entire methods, the synchronized keyword can be used in a special construct to guard arbitrary blocks of code. In this form it also takes an explicit argument that specifies the object for which it is to acquire a lock:
synchronized ( myObject ) { // Functionality that needs to be synced ... }
The code block above can appear in any method. When it is reached, the thread has to acquire the lock on myObject before proceeding. In this way, we can have methods (or parts of methods) in different classes synchronized the same as methods in the same class.
A synchronized method is, therefore, equivalent to a method with its statements synchronized on the current object. Thus:
synchronized void myMethod () { ... }
is equivalent to:
void myMethod () { synchronized ( this ) { ... } }
With the synchronized keyword, we can serialize the execution of complete methods and blocks of code. The wait() and notify() methods of the Object class extend this capability. Every object in Java is a subclass of Object, so every object inherits these methods. By using wait() and notify(), a thread can give up its hold on a lock at an arbitrary point, and then wait for another thread to give it back before continuing. All of the coordinated activity still happens inside of synchronized blocks, and still only one thread is executing at a given time.
By executing wait() from a synchronized block, a thread gives up its hold on the lock and goes to sleep. A thread might do this if it needs to wait for something to happen in another part of the application, as you'll see shortly. Later, when the necessary event happens, the thread that is running it calls notify() from a block synchronized on the same object. Now the first thread wakes up and begins trying to acquire the lock again.
When the first thread manages to reacquire the lock, it continues from the point it left off. However, the thread that waited may not get the lock immediately (or perhaps ever). It depends on when the second thread eventually releases the lock, and which thread manages to snag it next. Note also, that the first thread won't wake up from the wait() unless another thread calls notify(). There is an overloaded version of wait(), however, that allows us to specify a timeout period. If another thread doesn't call notify() in the specified period, the waiting thread automatically wakes up.
Let's look at a simple scenario to see what's going on. In the following example, we'll assume there are three threads--one waiting to execute each of the three synchronized methods of the MyThing class. We'll call them the waiter, notifier, and related threads, respectively. Here's a code fragment to illustrate:
class MyThing { synchronized void waiterMethod() { // Do some stuff // Now we need to wait for notifier to do something wait(); // Continue where we left off } synchronized void notifierMethod() { // Do some stuff // Notify waiter that we've done it notify(); // Do more things } synchronized void relatedMethod() { // Do some related stuff } }
Let's assume waiter gets through the gate first and begins executing waiterMethod(). The two other threads are initially blocked, trying to acquire the lock for the MyThing object. When waiter executes the wait() method, it relinquishes its hold on the lock and goes to sleep. Now there are two viable threads waiting for the lock. Which thread gets it depends on several factors, including chance and the priorities of the threads. (We'll discuss thread scheduling in the next section.)
Let's say that notifier is the next thread to acquire the lock, so it begins to run. waiter continues to sleep and related languishes, waiting for its turn. When notifier executes the call to notify(), Java prods the waiter thread, effectively telling it something has changed. waiter then wakes up and rejoins related in vying for the MyThing lock. Note that it doesn't actually receive the lock; it just changes from saying "leave me alone" to "I want the lock."
At this point, notifier still owns the lock and continues to hold it until it leaves its synchronized method (or perhaps executes a wait() itself). When it finally completes, the other two methods get to fight over the lock. waiter would like to continue executing waiterMethod() from the point it left off, while related, which has been patient, would like to get started. We'll let you choose your own ending for the story.
For each call to notify(), Java wakes up just one thread that is asleep in a wait() call. If there are multiple threads waiting, Java picks the first thread on a first-in, first-out basis. The Object class also provides a notifyAll() call to wake up all waiting threads. In most cases, you'll probably want to use notifyAll() rather than notify(). Keep in mind that notify() really means "Hey, something related to this object has changed. The condition you are waiting for may have changed, so check it again." In general, there is no reason to assume only one thread at a time is interested in the change or able to act upon it. Different threads might look upon whatever has changed in different ways.
Often, our waiter thread is waiting for a particular condition to change and we will want to sit in a loop like the following:
... while ( condition != true ) wait(); ...
Other synchronized threads call notify() or notifyAll() when they have modified the environment so that waiter can check the condition again. This is the civilized alternative to polling and sleeping, as you'll see in the following example.
Now we'll illustrate a classic interaction between two threads: a Producer and a Consumer. A producer thread creates messages and places them into a queue, while a consumer reads them out and displays them. To be realistic, we'll give the queue a maximum depth. And to make things really interesting, we'll have our consumer thread be lazy and run much slower than the producer. This means that Producer occasionally has to stop and wait for Consumer to catch up. The example below shows the Producer and Consumer classes.
import java.util.Vector; class Producer extends Thread { static final int MAXQUEUE = 5; private Vector messages = new Vector(); public void run() { try { while ( true ) { putMessage(); sleep( 1000 ); } } catch( InterruptedException e ) { } } private synchronized void putMessage() throws InterruptedException { while ( messages.size() == MAXQUEUE ) wait(); messages.addElement( new java.util.Date().toString() ); notify(); } // Called by Consumer public synchronized String getMessage() throws InterruptedException { notify(); while ( messages.size() == 0 ) wait(); String message = (String)messages.firstElement(); messages.removeElement( message ); return message; } } class Consumer extends Thread { Producer producer; Consumer(Producer p) { producer = p; } public void run() { try { while ( true ) { String message = producer.getMessage(); System.out.println("Got message: " + message); sleep( 2000 ); } } catch( InterruptedException e ) { } } public static void main(String args[]) { Producer producer = new Producer(); producer.start(); new Consumer( producer ).start(); } }
For convenience, we have included a main() method that runs the complete example in the Consumer class. It creates a Consumer that is tied to a Producer and starts the two classes. You can run the example as follows:
% java Consumer
The output is the time-stamp messages created by the Producer:
Got message: Sun Dec 19 03:35:55 CST 1996 Got message: Sun Dec 19 03:35:56 CST 1996 Got message: Sun Dec 19 03:35:57 CST 1996 ...
The time stamps initially show a spacing of one second, although they appear every two seconds. Our Producer runs faster than our Consumer. Producer would like to generate a new message every second, while Consumer gets around to reading and displaying a message only every two seconds. Can you see how long it will take the message queue to fill up? What will happen when it does?
Let's look at the code. We are using a few new tools here. Producer and Consumer are subclasses of Thread. It would have been a better design decision to have Producer and Consumer implement the Runnable interface, but we took the slightly easier path and subclassed Thread. You should find it fairly simple to use the other technique; you might try it as an exercise.
The Producer and Consumer classes pass messages through an instance of a java.util.Vector object. We haven't discussed the Vector class yet, but you can think of this one as a queue where we add and remove elements in first-in, first-out order. See Chapter 7 for more information about the Vector class.
The important activity is in the synchronized methods: putMessage() and getMessage(). Although one of the methods is used by the Producer thread and the other by the Consumer thread, they both live in the Producer class because they have to be synchronized on the same object to work together. Here they both implicitly use the Producer object's lock. If the queue is empty, the Consumer blocks in a call in the Producer, waiting for another message.
Another design option would implement the getMessage() method in the Consumer class and use a synchronized code block to explicitly synchronize on the Producer object. In either case, synchronizing on the Producer is important because it allows us to have multiple Consumer objects that feed on the same Producer.
putMessage()'s job is to add a new message to the queue. It can't do this if the queue is already full, so it first checks the number of elements in messages. If there is room, it stuffs in another time stamp. If the queue is at its limit however, putMessage() has to wait until there's space. In this situation, putMessage() executes a wait() and relies on the consumer to call notify() to wake it up after a message has been read. Here we have putMessage() testing the condition in a loop. In this simple example, the test probably isn't necessary; we could assume that when putMessage() wakes up, there is a free spot. However, this test is another example of good programming practice. Before it finishes, putMessage() calls notify() itself to prod any Consumer that might be waiting on an empty queue.
getMessage() retrieves a message for the Consumer. It enters a loop like the Producer's, waiting for the queue to have at least one element before proceeding. If the queue is empty, it executes a wait() and expects the producer to call notify() when more items are available. Notice that getMessage() makes its own unconditional call to notify(). This is a somewhat lazy way of keeping the Producer on its toes, so that the queue should generally be full. Alternatively, getMessage() might test to see if the queue had fallen below a low water mark before waking up the producer.
Now let's add another Consumer to the scenario, just to make things really interesting. Most of the necessary changes are in the Consumer class; the example below shows the code for the modified class.
class Consumer extends Thread { Producer producer; String name; Consumer(String name, Producer producer) { this.producer = producer; this.name = name; } public void run() { try { while ( true ) { String message = producer.getMessage(); System.out.println(name + " got message: " + message); sleep( 2000 ); } } catch( InterruptedException e ) { } } public static void main(String args[]) { Producer producer = new Producer(); producer.start(); // Start two this time new Consumer( "One", producer ).start(); new Consumer( "Two", producer ).start(); } }
The Consumer constructor now takes a string name, to identify each consumer. The run() method uses this name in the call to println() to identify which consumer received the message.
The only modification to make in the Producer code is to change the call to notify() in putMessage() to a call to notifyAll(). Now, instead of the consumer and producer playing tag with the queue, we can have many players waiting on the condition of the queue to change. We might have a number of consumers waiting for a message, or we might have the producer waiting for a consumer to take a message. Whenever the condition of the queue changes, we prod all of the waiting methods to reevaluate the situation by calling notifyAll(). Note, however, that we don't need to change the call to notify() in getMessage(). If a Consumer thread is waiting for a message to appear in the queue, it's not possible for the Producer to be simultaneously waiting because the queue is full.
Here is some sample output when there are two consumers running, as in the main() method shown above:
One got message: Wed Mar 20 20:00:01 CST 1996 Two got message: Wed Mar 20 20:00:02 CST 1996 One got message: Wed Mar 20 20:00:03 CST 1996 Two got message: Wed Mar 20 20:00:04 CST 1996 One got message: Wed Mar 20 20:00:05 CST 1996 Two got message: Wed Mar 20 20:00:06 CST 1996 One got message: Wed Mar 20 20:00:07 CST 1996 Two got message: Wed Mar 20 20:00:08 CST 1996 ...
We see nice, orderly alternation between the two consumers, as a result of the calls to sleep() in the various methods. Interesting things would happen, however, if we were to remove all of the calls to sleep() and let things run at full speed. The threads would compete and their behavior would depend on whether or not the system is using time slicing. On a time-sliced system, there should be a fairly random distribution between the two consumers, while on a non-time-sliced system, a single consumer could monopolize the messages. And since you're probably wondering about time slicing, let's talk about thread priority and scheduling.
Java makes certain guarantees as to how its threads are scheduled. Every thread has a priority value. If, at any time, a thread of a higher priority than the current thread becomes runnable, it preempts the lower priority thread and begins executing. By default, threads at the same priority are scheduled round robin, which means once a thread starts to run, it continues until it does one of the following:
Calls Thread.sleep() or wait()
Waits for a lock in order to run a synchronized method
Blocks, for example, in a read() or an accept() call
Calls yield()
Completes its target method or is terminated by a stop() call
This situation looks something like what's shown in Figure 6-4.
Java leaves certain aspects of scheduling up to the implementation.[2] The main point here is that some, but not all, implementations of Java use time slicing on threads of the same priority.[3] In a time-sliced system, thread processing is chopped up, so that each thread runs for a short period of time before the context is switched to the next thread, as shown in Figure 6-5.
[2] This implementation-dependent aspect of Java isn't a big deal, since it doesn't hurt for an implementation to add time slicing on top of the default round-robin scheduling. It's actually not hard to create a time-slicing effect by simply having a high-priority thread sleeping for a specified time interval. Every time it wakes up, it interrupts a lower-priority thread and causes processing to shift round robin to the next thread.

[3] As of Java Release 1.0, Sun's Java Interpreter for the Windows 95 and Windows NT platforms uses time slicing, as does the Netscape Navigator Java environment. Sun's Java 1.0 for the Solaris UNIX platforms doesn't.
Higher priority threads still preempt lower priority threads in this scheme. The addition of time slicing mixes up the processing among threads of the same priority; on a multiprocessor machine, threads may even be run simultaneously. Unfortunately, this feature can lead to differences in your application's behavior.
Since Java doesn't guarantee time slicing, you shouldn't write code that relies on this type of scheduling; any software you write needs to function under the default round-robin scheduling. But if you're wondering what your particular flavor of Java does, try the following experiment:
class Thready { public static void main( String args [] ) { new MyThread("Foo").start(); new MyThread("Bar").start(); } } class MyThread extends Thread { String message; MyThread ( String message ) { this.message = message; } public void run() { while ( true ) System.out.println( message ); } }
The Thready class starts up two MyThread objects. Each MyThread goes into a hard loop (very bad form) and prints its message. Since we don't specify a priority for either thread, they both inherit the priority of their creator, so they have the same priority. When you run this example, you will see how your Java implementation does its scheduling. Under a round-robin scheme, only "Foo" should be printed; "Bar" never appears. In a time-slicing implementation, you should occasionally see the "Foo" and "Bar" messages alternate.
Now let's change the priority of the second thread:
class Thready { public static void main( String args [] ) { new MyThread("Foo").start(); Thread bar = new MyThread("Bar"); bar.setPriority( Thread.NORM_PRIORITY + 1 ); bar.start(); } }
As you might expect, this changes how our example behaves. Now you may see a few "Foo" messages, but "Bar" should quickly take over and not relinquish control, regardless of the scheduling policy.
Here we have used the setPriority() method of the Thread class to adjust our thread's priority. The Thread class defines three standard priority values, as shown in Table 6-1.
If you need to change the priority of a thread, you should use one of these values or a value close to one of them. But let me warn you against using MAX_PRIORITY: if you elevate many threads to this level, priority will quickly become meaningless. A slight increase in priority should be enough for most needs. For example, specifying NORM_PRIORITY + 1 in our example is enough to beat out our other thread.
As I said earlier, whenever a thread sleeps, waits, or blocks on I/O, it gives up its time slot, and another thread is scheduled. So as long as you don't write methods that use hard loops, all threads should get their due. However, a Thread can also give up its time voluntarily with the yield() call. We can change our previous example to include a yield() on each iteration:
class MyThread extends Thread { ... public void run() { while ( true ) { System.out.println( message ); yield(); } } }
Now you should see "Foo" and "Bar" messages alternating one for one. If you have threads that perform very intensive calculations, or otherwise eat a lot of CPU time, you might want to find an appropriate place for them to yield control occasionally. Alternatively, you might want to drop the priority of your intensive thread, so that more important processing can proceed around it.
How do you convert decimal values to their hex equivalent in JavaScript?
Convert a number to a hexadecimal string with:
hexString = yourNumber.toString(16);
and reverse the process with:
yourNumber = parseInt(hexString, 16);
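Putting the two together, the round trip looks like this (plain JavaScript, no library assumptions):

```javascript
var n = 255;

var hexString = n.toString(16);          // decimal -> hex: "ff"
var roundTrip = parseInt(hexString, 16); // hex -> decimal: 255

console.log(hexString, roundTrip);
```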
The code below will convert the decimal value d to hex. It also allows you to add padding to the hex result, so 0 will become 00 by default.
function decimalToHex(d, padding) { var hex = Number(d).toString(16); padding = typeof (padding) === "undefined" || padding === null ? padding = 2 : padding; while (hex.length < padding) { hex = "0" + hex; } return hex; }
var number = 3200; var hexString = number.toString(16);
The 16 is the radix and there are 16 values in a hexadecimal number :-)
If you need to handle things like bit fields or 32-bit colors, then you need to deal with signed numbers. JavaScript's toString(16) will return a negative hex number, which is usually not what you want. This function does some crazy addition to make it a positive number.
function decimalToHexString(number) { if (number < 0) { number = 0xFFFFFFFF + number + 1; } return number.toString(16).toUpperCase(); }
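For example, the function maps negative inputs onto their unsigned 32-bit equivalents (redefined here so the snippet stands alone):

```javascript
function decimalToHexString(number) {
  if (number < 0) {
    // Reinterpret the negative value as its unsigned 32-bit counterpart.
    number = 0xFFFFFFFF + number + 1;
  }
  return number.toString(16).toUpperCase();
}

console.log(decimalToHexString(255)); // "FF"
console.log(decimalToHexString(-1));  // "FFFFFFFF"
```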
function dec2hex(i) { var result = "0000"; if (i >= 0 && i <= 15) { result = "000" + i.toString(16); } else if (i >= 16 && i <= 255) { result = "00" + i.toString(16); } else if (i >= 256 && i <= 4095) { result = "0" + i.toString(16); } else if (i >= 4096 && i <= 65535) { result = i.toString(16); } return result }
AFAIK comment 57807 is wrong and should be something like: var hex = Number(d).toString(16); instead of var hex = parseInt(d, 16);
Without the loop:
function decimalToHex(d) { var hex = Number(d).toString(16); hex = "000000".substr(0, 6 - hex.length) + hex; return hex; } //or "#000000".substr(0, 7 - hex.length) + hex; //or whatever //*Thanks to MSDN
Also, isn't it better to avoid loop tests that have to be re-evaluated on every iteration? E.g., instead of:
for (var i = 0; i < hex.length; i++){}
have
for (var i = 0, j = hex.length; i < j; i++){}
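As a quick check, the loop-free decimalToHex above pads as promised (repeated here so the snippet runs on its own):

```javascript
function decimalToHex(d) {
  var hex = Number(d).toString(16);
  // Take as many leading zeros as needed to reach six characters.
  hex = "000000".substr(0, 6 - hex.length) + hex;
  return hex;
}

console.log(decimalToHex(255));      // "0000ff"
console.log(decimalToHex(16777215)); // "ffffff" -- already six digits, no padding added
```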
With padding:
function dec2hex(i) { return (i+0x10000).toString(16).substr(-4).toUpperCase(); }
Constrained/Padded to a set number of characters:
function decimalToHex(decimal, chars) { return (decimal + Math.pow(16, chars)).toString(16).slice(-chars).toUpperCase(); }
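For instance, with the version above (repeated so the snippet is self-contained):

```javascript
function decimalToHex(decimal, chars) {
  // Adding 16^chars guarantees at least chars+1 hex digits,
  // so slicing off the last chars digits yields a zero-padded result.
  return (decimal + Math.pow(16, chars)).toString(16).slice(-chars).toUpperCase();
}

console.log(decimalToHex(26, 4)); // "001A"
console.log(decimalToHex(255, 2)); // "FF"
```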
If you want to convert a number to a hex representation of an RGBA color value, I've found this to be the most useful combination of several tips from here:
function toHexString(n) { if(n < 0) { n = 0xFFFFFFFF + n + 1; } return "0x" + ("00000000" + n.toString(16).toUpperCase()).substr(-8); }
And if the number is negative?
Here is my version.
function hexdec(hex_string) { // Ensure an "0X" prefix so the implicit numeric conversion parses as hex hex_string = ((hex_string.charAt(1) != 'X' && hex_string.charAt(1) != 'x') ? hex_string = '0X' + hex_string : hex_string); // Values with the high bit set are treated as negative (two's complement) hex_string = (hex_string.charAt(2) < 8 ? hex_string = hex_string - 0x00000000 : hex_string = hex_string - 0xFFFFFFFF - 1); return parseInt(hex_string, 10); }
function toHex(d) { return ("0"+(Number(d).toString(16))).slice(-2).toUpperCase() }
Combining some of these good ideas for an rgb to hex function (add the # elsewhere for html/css):
function rgb2hex(r,g,b) { if (g !== undefined) return Number(0x1000000 + r*0x10000 + g*0x100 + b).toString(16).substring(1); else return Number(0x1000000 + r[0]*0x10000 + r[1]*0x100 + r[2]).toString(16).substring(1); }
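A quick check of both calling conventions (three numbers, or a single [r, g, b] array), with the function repeated so the snippet stands alone:

```javascript
function rgb2hex(r, g, b) {
  // The 0x1000000 offset forces a 7-digit hex string; substring(1) drops the leading 1.
  if (g !== undefined)
    return Number(0x1000000 + r * 0x10000 + g * 0x100 + b).toString(16).substring(1);
  else
    return Number(0x1000000 + r[0] * 0x10000 + r[1] * 0x100 + r[2]).toString(16).substring(1);
}

console.log(rgb2hex(255, 136, 0));   // "ff8800"
console.log(rgb2hex([255, 136, 0])); // "ff8800" -- same result from an array
```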
I'm doing conversion to hex string in a pretty large loop, so I tried several techniques in order to find the fastest one. My requirements were to have a fixed-length string as a result, and encode negative values properly (-1 => ff..f).
Simple .toString(16) didn't work for me since I needed negative values to be properly encoded. The following code is the quickest I've tested so far on 1-2 byte values (note that symbols defines the number of output symbols you want to get; for a 4-byte integer it should be equal to 8):
var hex = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f']; function getHexRepresentation(num, symbols) { var result = ''; while (symbols--) { result = hex[num & 0xF] + result; num >>= 4; } return result; }
It performs faster than .toString(16) on 1-2 byte numbers and slower on larger numbers (when symbols >= 6), but still should outperform methods that encode negative values properly.
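Exercising the lookup-table function (redefined here so the snippet runs on its own) shows it handles both positive and negative inputs at a fixed width:

```javascript
var hex = ['0', '1', '2', '3', '4', '5', '6', '7',
           '8', '9', 'a', 'b', 'c', 'd', 'e', 'f'];

function getHexRepresentation(num, symbols) {
  var result = '';
  while (symbols--) {
    result = hex[num & 0xF] + result; // take the low nibble, prepend it
    num >>= 4;                        // shift the next nibble into place
  }
  return result;
}

console.log(getHexRepresentation(255, 2)); // "ff"
console.log(getHexRepresentation(-1, 8));  // "ffffffff" -- two's complement for free
```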
As the accepted answer states, the easiest way to convert from dec to hex is var hex = dec.toString(16). However, you may prefer to add a string conversion, as it ensures that string representations like "12".toString(16) work correctly.
// avoids a hard-to-track-down bug by returning `c` instead of `12`
(+"12").toString(16);
To reverse the process you may also use the solution below, as it is even shorter.
var dec = +("0x" + hex);
It seems to be slower in Google Chrome and Firefox, but is significantly faster in Opera.
For completion, if you want the two's-complement hexadecimal representation of a negative number, you can use the zero-fill-right shift
>>> operator. For instance:
> (-1).toString(16)
"-1"
> ((-2)>>>0).toString(16)
"fffffffe"
There is however one limitation: javascript bitwise operators treat their operands as a sequence of 32 bits, that is, you get the 32-bits two's-complement.
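Given that limitation, here is a small sketch of a fixed-width helper (my own, using padStart from ES2017, not from the answer above) that always yields the full 8-digit two's-complement form:

```javascript
// Sketch: fixed-width 32-bit two's-complement hex for any integer.
// (n >>> 0) reinterprets the number as an unsigned 32-bit value,
// and padStart pads positive values out to 8 digits.
function toHex32(n) {
  return (n >>> 0).toString(16).padStart(8, "0");
}

console.log(toHex32(-2));  // "fffffffe"
console.log(toHex32(255)); // "000000ff"
```

Because of the 32-bit truncation performed by `>>>`, this only makes sense for values that fit in 32 bits.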
To sum it all up:
function toHex(i, pad) {
    if (typeof(pad) === 'undefined' || pad === null) {
        pad = 2;
    }
    var strToParse = i.toString(16);
    while (strToParse.length < pad) {
        strToParse = "0" + strToParse;
    }
    var finalVal = parseInt(strToParse, 16);
    if (finalVal < 0) {
        finalVal = 0xFFFFFFFF + finalVal + 1;
    }
    return finalVal;
}
However, if you don't need to convert it back to an integer at the end (i.e. for colors), then just making sure the values aren't negative should suffice.
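For the color use case, here is a minimal sketch of that idea (my own helper, not from the answer above), clamping each channel into the 0-255 range before converting:

```javascript
// Sketch: clamp a channel value into 0..255, then emit two hex digits.
// Negative inputs become "00", oversized inputs become "ff".
function channelToHex(v) {
  const clamped = Math.min(255, Math.max(0, Math.round(v)));
  return clamped.toString(16).padStart(2, "0");
}

console.log(channelToHex(-10)); // "00"
console.log(channelToHex(300)); // "ff"
console.log(channelToHex(64));  // "40"
```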
The accepted answer did not take into account single digit returned hex codes. This is easily adjusted by:
function numHex(s) {
    var a = s.toString(16);
    if ((a.length % 2) > 0) {
        a = "0" + a;
    }
    return a;
}
and
function strHex(s) {
    var a = "";
    for (var i = 0; i < s.length; i++) {
        a = a + numHex(s.charCodeAt(i));
    }
    return a;
}
I believe the above answers have been posted numerous times by others in one form or another. I wrap these in a toHex() function like so:
function toHex(s) {
    var re = new RegExp(/^\s*(\+|-)?((\d+(\.\d+)?)|(\.\d+))\s*$/);
    if (re.test(s)) {
        return '#' + strHex(s.toString());
    } else {
        return 'A' + strHex(s);
    }
}
Note that the numeric regular expression came from 10+ Useful JavaScript Regular Expression Functions to improve your web applications efficiency.
Update: After testing this thing several times I found an error (double quotes in the RegExp), so I fixed that. HOWEVER! After quite a bit of testing, and having read the post by almaz, I realized I could not get negative numbers to work.

Further, I did some reading up on this, and since all JavaScript numbers are stored as 64-bit words no matter what, I tried modifying the numHex code to get the 64-bit word. But it turns out you can not do that. If you put "3.14159265" AS A NUMBER into a variable, all you will be able to get is the "3", because the fractional portion is only accessible by multiplying the number by ten (i.e. 10.0) repeatedly. To put that another way: ANDing against the HEX value 0xF causes the FLOATING POINT value to be translated into an INTEGER first, which removes everything behind the period, rather than taking the value as a whole (i.e. 3.14159265) and ANDing the FLOATING POINT value against 0xF.

So the best thing to do in this case is to convert the 3.14159265 into a STRING and then just convert the string. This also makes it easy to convert negative numbers, because the minus sign just becomes 0x2D on the front of the value. So what I did was, on determining that the variable contains a number, simply convert it to a string and convert the string. What this means is that on the server side you will need to unhex the incoming string, and then determine whether the incoming information is numeric. You can do that easily by adding a "#" to the front of numbers and an "A" to the front of a character string coming back. See the toHex() function.
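To illustrate the server-side decoding mentioned above, here is a hedged sketch of an unHex counterpart to strHex (the name and the assumption of two hex digits per character are mine; it expects the raw encoded payload without the '#'/'A' prefix):

```javascript
// Sketch: decode a string produced by strHex() back into text.
// Each pair of hex digits is assumed to be one character code.
function unHex(hexStr) {
  let out = "";
  for (let i = 0; i < hexStr.length; i += 2) {
    out += String.fromCharCode(parseInt(hexStr.slice(i, i + 2), 16));
  }
  return out;
}

console.log(unHex("414243")); // "ABC"
```

This only round-trips cleanly for character codes up to 0xFF, matching the two-digit-per-character encoding above.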
Have fun!
For anyone interested, here's a JSFiddle comparing most of the answers given to this question.
And here's the method I ended up going with:
function decToHex(dec) { return (dec + Math.pow(16, 6)).toString(16).substr(-6); }
Also, bear in mind that if you're looking to convert from decimal to hex for use in CSS as a color data type, you might instead prefer to extract the RGB values from the decimal and use rgb().
var c = 4210330; // your color in decimal format
var rgb = [(c & 0xff0000) >> 16, (c & 0x00ff00) >> 8, (c & 0x0000ff)];
// assuming you're using jQuery...
$("#some-element").css("color", "rgb(" + rgb + ")");
This sets
#some-element's CSS
color property to
rgb(64, 62, 154).
How to convert decimal to hex in JavaScript?
I know this question is old, but I wasn't able to find a brutally clean/simple decimal-to-hex conversion that didn't involve a mess of functions and arrays... so I had to make this for myself. Posting this to help anyone looking for it - I know it would have saved me some time.
D = 3678;   // Data (decimal)
C = 0xF;    // Check (nibble mask)
A = D;      // Accumulate
B = -1;     // Base string length
S = '';     // Source 'string'
H = '0x';   // Destination 'string'
do {
    ++B;
    A &= C;
    switch (A) {
        case 0xA: A = 'A'; break;
        case 0xB: A = 'B'; break;
        case 0xC: A = 'C'; break;
        case 0xD: A = 'D'; break;
        case 0xE: A = 'E'; break;
        case 0xF: A = 'F'; break;
        // digits 0-9 fall through unchanged
    }
    S += A;
    D >>>= 0x04;
    A = D;
} while (D);
do H += S[B]; while (B--);
S = B = A = C = D; // Zero out variables
alert(H); // H: holds hex equivalent
P.S. Wrap it in a function if needed. Put the decimal number to be converted in D and the hexadecimal equivalent will come out in H. It should be self-explanatory and runs very fast. You can also add padding if needed; it automatically adjusts to the size of the decimal number you input (a basic 32-bit int being the max), with no extra padding.
You can check the following JsFiddle example or Stackoverflow JS example code as well.
'use strict';

var convertBase = function () {
    function convertBase(baseFrom, baseTo) {
        return function (num) {
            return parseInt(num, baseFrom).toString(baseTo);
        };
    }
    // decimal to hexadecimal
    convertBase.dec2hex = convertBase(10, 16);
    return convertBase;
}();

alert(convertBase.dec2hex('42')); // '2a'
04-18-2013 05:37 AM - edited 04-18-2013 05:57 AM
Hey...
what to do, if i want to have different BAR files for z10 and q10 for my app?
The way I thought of was to add a Q10 release for my app and only tick Q10 at "devices". But can I have the same version for both BARs? What is the right way to do that? I know it would be better to have one BAR for both, but this is the way I want to do it right now...
thank you!!
Solved! Go to Solution.
04-19-2013 04:11 AM
anyone?
04-19-2013 09:16 AM
Just give then different ids e.g.
com.jplust.myappZ10
com.jplust.myappQ10
Then, as you say, only mark Z10 and Q10 for the respective device releases
You'll need two source trees with one letter in the config files different
Seems a bit pointless really
04-19-2013 10:26 AM
thank you peardox!
following scenario also seems to work, but i will have different builds in appworld for both devices:
1) same ID for Q10 / Z10 in config.xml
2) sign z10-bar with e.g. --buildID 1.0.0.0 -> tick z10 in vendorportal
3) sign q10-bar with e.g. --buildID 1.0.0.1 -> tick q10 in vendorportal
any disadvantages for this scenario except different builds in appworld? thank you...
04-19-2013 10:33 AM
Nope, nothing wrong with it
AppWorld have actually renamed some of my stuff for me when I've put the same thing up for BB10 + Playbook to avoid namespace clashes
i.e. what you're doing is fine as far as AppWorld are concerned but the reverse is not true - you can't have MyMegaApp on PB + BB10, they have to have different names (Z10 + Q10 don't)
04-19-2013 10:36 AM
great... thanks for your time!
Writing tests and test cases is important in software engineering; however, many people, especially newly minted developers, seem afraid of tests or simply avoid putting them into practice.
Need for unit testing
Testing allows you to see whether each small piece of code produces the desired outcome and works as intended. These small pieces of code are called units, and they can't get any simpler than that. It's their small and isolated nature that makes it so easy to fix a problem when one arises.
Most bugs, holes, and oversights in programs are noticed during runtime. Unit testing allows for the automation of the testing process and helps you pinpoint the offending code which would usually be hidden behind a complex architecture, posing a seemingly much greater problem.
JUnit and TestNG are the most widespread unit testing frameworks these days, and we will be committing our time to the former in this article.
Standard unit testing practices
There are some standards to follow while writing unit tests.
Unit test location - Typically, we put Java classes into src/main/java while we put test classes in src/test/java
Naming Conventions - It's standard practice to name test classes the same as the classes being tested with the addition of "Test" at the end. Ex: "MainController" and "MainControllerTest". Maven took advantage of this convention and includes all classes with this suffix in its test scope.
Unit test information - Providing meaningful and useful messages during tests is crucial. If you're working in a team, allowing others to understand your tests is highly commendable.
Method Naming Convention - When writing test methods, there are multiple approaches:
- should[action]
Example: mailShouldBeSent or cartShouldGetCleared
- should[consequence]when[action]
Example: shouldBanWhenEULAIsBroken
- Given[input]When[action]Then[consequence]
Example: Given_UserIsLoggedIn_When_SessionIsExpired_Then_LogoutUser
Annotations
JUnit has introduced us to some new annotations as well:
- @Test - marks a method as a test method
- @Before / @BeforeClass - run before each test method / once before all tests in the class
- @After / @AfterClass - run after each test method / once after all tests in the class
Note that JUnit 5 introduces
@BeforeEach and
@BeforeAll instead of
@Before and
@BeforeClass respectively, as well as
@AfterEach and
@AfterAll. These annotation names are more indicative and cause less confusion.
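The ordering these annotations imply can be sketched without JUnit at all. The following plain-Java simulation (class and method names are my own, and no JUnit dependency is used) just shows the sequence in which JUnit 5 would invoke the hooks for a class with two test methods:

```java
// Sketch: plain-Java simulation of the JUnit 5 lifecycle order.
// It only records the sequence in which the hooks would fire
// for a test class containing two @Test methods.
import java.util.ArrayList;
import java.util.List;

public class LifecycleDemo {

    // Returns the call sequence JUnit would produce.
    static List<String> simulate() {
        List<String> calls = new ArrayList<>();
        calls.add("@BeforeAll");                         // once, before everything
        for (String test : new String[] {"shouldReturnTwenty", "shouldReturnThirty"}) {
            calls.add("@BeforeEach");                    // before every test
            calls.add("@Test " + test);                  // the test itself
            calls.add("@AfterEach");                     // after every test
        }
        calls.add("@AfterAll");                          // once, after everything
        return calls;
    }

    public static void main(String[] args) {
        System.out.println(String.join(" -> ", simulate()));
    }
}
```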
Using JUnit
JUnit tests are simply segregated methods in a test class. To define a method to be a test method, we annotate it with the
@Test annotation.
By using the
assert method, you can check a result and compare it to an expected result. Generally, these methods are referred to as
asserts or
assert statements, and you will find these terms used interchangeably in literature.
Examples:
Let's create a class with a simple method that adds two numbers:
public class Main {
public static int addNumbers(int x, int y) {
int result = x + y;
return result;
}
}
To test if this code runs successfully and as expected, we make a new test class, paying attention to conventions above:
public class MainTest {
@Test
public void shouldReturnTwenty() {
Main testMain = new Main();
assertEquals("15 + 5 must return 20", 20, testMain.addNumbers(5, 15));
}
}
Our
assertEquals method accepts a message if the test fails, an expected int, and the actual result. In our case, we are expecting the method to return 20, so this check runs successfully.
Process finished with exit code 0
On the other hand, if we modify the test like so:
public class MainTest {
@Test
public void shouldReturnTwenty() {
Main testMain = new Main();
assertEquals("15 + 5 must return 20", 25, testMain.addNumbers(5, 15));
}
}
Our test fails and we are greeted with our message:
java.lang.AssertionError: 15 + 5 must return 20
Expected :25
Actual :20
<Click to see difference>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at test.MainTest.shouldReturnTwenty(MainTest.java:15)
...
Process finished with exit code -1
JUnit Test Fixture
Let's simply lay out a standard test fixture and what a unit test looks like:
public class MainTest {
private static Main testMain;
@BeforeAll
public static void setupClass() {
testMain = new Main();
}
@BeforeEach
public void setupForEachMethod() {
...code
}
@Test
public void shouldReturnTwenty() {
assertEquals("15+5 must return 20", 20, testMain.addNumbers(5, 15));
}
@Test
public void shouldReturnThirty() {
assertEquals("25+5 must return 30", 30, testMain.addNumbers(25, 5));
}
@Test
public void shouldReturnSomething() {
...code
}
@AfterEach
public void afterEachTest() {
...code
}
@AfterAll
public static void afterAllTests() {
...code
}
}
It's arguable which methods should be tested. Some argue that all code should be tested in such classes, while others argue that it's unnecessary for some methods, but everybody agrees that you should write tests for critical parts of the code, especially newly developed features.
In our example, we tested our method with two tests -
shouldReturnTwenty and
shouldReturnThirty.
Conclusion
In this article, we went over the need for testing, standard practices, naming conventions, and a test fixture, got familiar with annotations and methods from the JUnit framework and wrote our own test.
was going to remove the replication memberships for the one-way connections - and then remove the members completely. Once that is done I am thinking i will need to remove all of the one-way replicated data from the current 2ndary member server.
-------------
That looks right - at least that's how we have done it in the past. I will add that when I've done re-establishment in the past it's been with 2003 FRS DFS... 2008 should act the same, but the normal 'I'm not responsible' disclaimer applies :) Ensure you have a backup of the data before doing anything :)
This may help u
AtB
The above suggestion of setting up replication with the server locally (temporarily on a local IP), allow replication to occur, and then move offsite is a pretty good one.
I'd use robocopy with appropriate switches (/copyall perhaps) to preserve your permissions if you want to do a faster manual copy. If your target volume is on a SAS external enclosure or similar, this might be the fastest way to accomplish your goals (without saturating your network for the long copy). You can move the enclosure to the source server, copy files with robocopy, and then move the enclosure to the destination server and join replication group.
It should use hash values to determine if replication is necessary and discover the files already on the server and not replicate again (although it may true up metadata, like ntfs permissions/etc as part of the replication).
Do you guys see any issue in doing a two-way replication for our "backup" site?
Thx
In general I hate one-way replication... I always go for two-way unless the business need specifically dictates otherwise.
Also keep in mind that DFS has issues with the file locking mechanism that office apps employ, meaning you could run into conflicts if you have a multi person file edit situation.
Will I have any issues with deleting replication groups during the day? Most of them are disconnected or disabled by now. I just want to make sure that our namespaces will remain intact, because all of our employees have mapped network drives that point to the namespaces.
I will only have 2 servers to replicate to - does anyone know of any issues I should expect by removing the existing replication groups and recreating them during the day? I just want to make sure we don't have folders/files getting deleted again.
Thanks!
Start from fresh: create the DFS root -> targets etc. Add the files into the local one (the primary, if you will) and make sure that referrals are disabled to the offsite one until the files have replicated. It does mean that access will be slower as remote people access the local office copy. Once replication is complete and the logs indicate that you're in sync, you can enable referrals on both.
Just to check: you are using, or will be using, a DOMAIN DFS root, yes? Rather than a standalone - so it's \\fulldomainname\root\fold
Just wanted to check, as the experience is much better ;) If so, then you can re-create using the same names, and drive mappings will be maintained.
Is this correct?
The best way (least hassle) is to have a two-way mirror, meaning HQ and Branch always stay in sync. If you want to be doubly sure of consistency for DR reasons you could disable referrals to server2 so people would NEVER save files onto it... but the downside is the WAN would be used a lot, as everyone would go to server1.
At my place of work we have 3 servers (SITE 1, 2, 3), each has a file server and we have a full-mesh replication system (we use an MPLS WAN with a virtual full mesh), meaning if any server goes down nobody notices; we can repair and bring it back, or slot in a new server (turning referrals off so nobody sees the empty share), wait for the rebuild and then switch referrals back on.
To summarise: with one-way referrals you're right, it's a nightmare for ensuring consistency.
Couple that with Shadow Copying on the DFS drives and you can be sure to get back any deleted data too, if people get silly.
Does this seem correct? Also, I am thinking this would be safe to do now as well. By removing the replication groups I won't remove any namespaces or anything, will I?
I just want to make sure that because DFS sees the 2ndary member as empty - it wont remove all of the files from the primary.
SERVER 1 = all the data
SERVER 2 = empty target
etc
etc
then the replication will be fine, as it will replicate the data to all the other targets. The issues start when you have multiple shares with data on them... that's a mess.
Django 1.0 Website Development
An important aspect of socializing in our application is letting users maintain their friend lists and browse through the bookmarks of their friends. So, in this section we will build a data model to maintain user relationships, and then program two views to enable users to manage their friends and browse their friends' bookmarks.
Creating the friendship data model
Let's start with the data model for the friends feature. When a user adds another user as a friend, we need to maintain both users in one object. Therefore, the Friendship data model will consist of two references to the User objects involved in the friendship. Create this model by opening the bookmarks/models.py file and inserting the following code in it:
class Friendship(models.Model):
from_friend = models.ForeignKey(
User, related_name='friend_set'
)
to_friend = models.ForeignKey(
User, related_name='to_friend_set'
)
def __unicode__(self):
return u'%s, %s' % (
self.from_friend.username,
self.to_friend.username
)
class Meta:
unique_together = (('to_friend', 'from_friend'), )
The Friendship data model starts with defining two fields that are User objects: from_friend and to_friend. from_friend is the user who added to_friend as a friend. As you can see, we passed a keyword argument called related_name to both fields. The reason for this is that both fields are foreign keys that refer back to the User data model, which would cause Django to try to create two attributes called friendship_set in each User object and result in a name conflict. To avoid this problem, we provide a specific name for each attribute. Consequently, each User object will contain two new attributes: user.friend_set, which contains the friends of this user, and user.to_friend_set, which contains the users who added this user as a friend. Throughout this article, we will only use the friend_set attribute, but the other one is there in case you need it.
Next, we defined a __unicode__ method in our data model. This method is useful for debugging.
Finally, we defined a class called Meta. This class may be used to specify various options related to the data model. Some of the commonly used options are:
- db_table: This is the name of the table to use for the model. This is useful when the table name generated by Django is a reserved keyword in SQL, or when you want to avoid conflicts if a table with the same name already exists in the database.
- ordering: This is a list of field names. It declares how objects are ordered when retrieving a list of objects. A column name may be preceded by a minus sign to change the sorting order from ascending to descending.
- permissions: This lets you declare custom permissions for the data model in addition to add, change, and delete permissions. Permissions should be a list of two-tuples, where each two-tuple should consist of a permission codename and a human-readable name for that permission. For example, you can define a new permission for listing friend bookmarks by using the following Meta class:
class Meta:
permissions = (
('can_list_friend_bookmarks',
'Can list friend bookmarks'),
)
- unique_together: A list of field names that must be unique together.
We used the unique_together option here to ensure that a Friendship object is added only once for a particular relationship. There cannot be two Friendship objects with equal to_friend and from_friend fields. This is equivalent to the following SQL declaration:
UNIQUE ("from_friend", "to_friend")
If you check the SQL generated by Django for this model, you will find something similar to this in the code.
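The effect of that constraint is easy to demonstrate outside Django with plain sqlite3 (a standalone sketch; the table layout mirrors what Django would generate, but the column names are illustrative):

```python
import sqlite3

# Sketch: reproduce the UNIQUE ("from_friend", "to_friend") constraint
# that Django generates for the unique_together Meta option.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE bookmarks_friendship (
        id INTEGER PRIMARY KEY,
        from_friend_id INTEGER NOT NULL,
        to_friend_id INTEGER NOT NULL,
        UNIQUE (from_friend_id, to_friend_id)
    )
""")
conn.execute("INSERT INTO bookmarks_friendship (from_friend_id, to_friend_id) VALUES (1, 2)")

try:
    # A second identical pair violates the constraint.
    conn.execute("INSERT INTO bookmarks_friendship (from_friend_id, to_friend_id) VALUES (1, 2)")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False

# The reverse direction (2, 1) is a different pair and is allowed.
conn.execute("INSERT INTO bookmarks_friendship (from_friend_id, to_friend_id) VALUES (2, 1)")

print("duplicate allowed:", duplicate_allowed)  # duplicate allowed: False
```

In Django, the same violation surfaces as an IntegrityError when the second identical Friendship object is saved.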
After entering the data model code into the bookmarks/models.py file, run the following command to create its corresponding table in the database:
$ python manage.py syncdb
Now let's experiment with the new model and see how to store and retrieve relations of friendship. Run the interactive console using the following command:
$ python manage.py shell
Next, retrieve some User objects and build relationships between them (but make sure that you have at least three users in the database):
>>> from bookmarks.models import *
>>> from django.contrib.auth.models import User
>>> user1 = User.objects.get(id=1)
>>> user2 = User.objects.get(id=2)
>>> user3 = User.objects.get(id=3)
>>> friendship1 = Friendship(from_friend=user1, to_friend=user2)
>>> friendship1.save()
>>> friendship2 = Friendship(from_friend=user1, to_friend=user3)
>>> friendship2.save()
Now, user2 and user3 are both friends of user1. To retrieve the list of Friendship objects associated with user1, use:
>>> user1.friend_set.all()
[<Friendship: user1, user2>, <Friendship: user1, user3>]
(The actual usernames in output were replaced with user1, user2, and user3 for clarity.)
As you may have already noticed, the attribute is named friend_set because we called it so using the related_name option when we created the Friendship model.
Next, let's see one way to retrieve the User objects of user1's friends:
>>> [friendship.to_friend for friendship in
user1.friend_set.all()]
[<User: user2>, <User: user3>]
The last line of code uses a Python feature called "list comprehension" to build the list of User objects. This feature allows us to build a list by iterating over another list. Here, we built the User list by iterating over a list of Friendship objects. If this syntax looks unfamiliar, please refer to the List Comprehension section in the Python tutorial.
Notice that user1 has user2 as a friend, but the opposite is not true.
>>> user2.friend_set.all()
[]
In other words, the Friendship model works only in one direction. To add user1 as a friend of user2, we need to construct another Friendship object.
>>> friendship3 = Friendship(from_friend=user2, to_friend=user1)
>>> friendship3.save()
>>> user2.friend_set.all()
[<Friendship: user2, user1>]
By reversing the arguments passed to the Friendship constructor, we built a relationship in the other way. Now user1 is a friend of user2 and vice-versa. Experiment more with the model to make sure that you understand how it works. Once you feel comfortable with it, move to the next section, where we will write views to utilize the data model. Things will only get more exciting from now on!
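Since each direction is a separate row, a symmetric "befriend" operation has to create two objects. Here is a hedged sketch of that idea, modelling the table as a plain set of (from, to) tuples rather than real Django objects (the helper names are my own):

```python
# Sketch: symmetric friendship on top of a one-directional store.
# `friendships` is a set of (from_friend, to_friend) pairs, standing
# in for rows of the Friendship table.
def befriend(friendships, user_a, user_b):
    """Make user_a and user_b friends of each other (both directions)."""
    friendships.add((user_a, user_b))
    friendships.add((user_b, user_a))

def friends_of(friendships, user):
    """Equivalent of [f.to_friend for f in user.friend_set.all()]."""
    return sorted(to for frm, to in friendships if frm == user)

friendships = set()
befriend(friendships, "user1", "user2")
befriend(friendships, "user1", "user3")

print(friends_of(friendships, "user1"))  # ['user2', 'user3']
print(friends_of(friendships, "user2"))  # ['user1']
```

With real models, befriend would simply save two Friendship objects with the arguments swapped, exactly as done manually above.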
Writing views to manage friends
Now that we are able to store and retrieve user relationships, it's time to create views for these features. In this section we will build two views: one for adding a friend, and another for listing friends and their bookmarks.
We will use the following URL scheme for friend-related views:
- If the view is for managing friends (adding a friend, removing a friend, and so on), its URL should start with /friend/. For example, the URL of the view that adds a friend will be /friend/add/.
- If the view is for viewing friends and their bookmarks, its URL should start with /friends/. For example, /friends/username/ will be used to display the friends of username.
This convention is necessary to avoid conflicts. If we use the prefix /friend/ for all views, what happens if a user registers the username add? The Friends page for this user will be /friend/add/, just like the view to add a friend. The first URL mapping in the URL table will always be used, and the second will become inaccessible, which is obviously a bug.
Now that we have a URL scheme in mind, let's start with writing the friends list view.
The friends list view
This view will receive a username in the URL, and will display this user's friends and their bookmarks. To create the view, open the bookmarks/views.py file and add the following code to it:
def friends_page(request, username):
user = get_object_or_404(User, username=username)
friends = [friendship.to_friend
for friendship in user.friend_set.all()]
friend_bookmarks = Bookmark.objects.filter(
user__in=friends
).order_by('-id')
variables = RequestContext(request, {
'username': username,
'friends': friends,
'bookmarks': friend_bookmarks[:10],
'show_tags': True,
'show_user': True
})
return render_to_response('friends_page.html', variables)
This view is pretty simple. It receives a username and operates upon it as follows:
- The User object that corresponds to the username is retrieved using the shortcut method get_object_or_404.
- The friends of this user are retrieved using the list comprehension syntax mentioned in the previous section.
- After that, the bookmarks of the user's friends are retrieved using the filter method. The user__in keyword argument is passed to filter in order to retrieve the bookmarks of all users who appear in the friends list. order_by is chained to filter for the purpose of sorting bookmarks by id in descending order.
- Finally, the variables are put into a RequestContext object and are sent to a template named friends_page.html. We used the slice syntax with friend_bookmarks to get only the latest ten bookmarks.
Let's write the view's template next. Create a file called friends_page.html in the templates folder with the following code in it:
{% extends "base.html" %}
{% block title %}Friends for {{ username }}{% endblock %}
{% block head %}Friends for {{ username }}{% endblock %}
{% block content %}
<h2>Friend List</h2>
{% if friends %}
<ul class="friends">
{% for friend in friends %}
<li><a href="/user/{{ friend.username }}/">
{{ friend.username }}</a></li>
{% endfor %}
</ul>
{% else %}
<p>No friends found.</p>
{% endif %}
<h2>Latest Friend Bookmarks</h2>
{% include "bookmark_list.html" %}
{% endblock %}
The template should be self-explanatory; there is nothing new in it. We iterate over the friends list and create a link for each friend. Next, we create a list of friend bookmarks by including the bookmark_list.html template.
Finally, we will add a URL entry for the view. Open the urls.py file and insert the following mapping into the urlpatterns list:
urlpatterns = patterns('',
[...]
# Friends
(r'^friends/(\w+)/$', friends_page),
)
This URL entry captures the username portion in the URL using a regular expression, exactly the way we did in the user_page view.
Although we haven't created a view for adding friends yet, you can still see this view by manually adding some friends to your account (if you haven't done so already). Use the interactive console to make sure that your account has friends, and then start the development server and point your browser to /friends/your_username/ (replacing your_username with your actual username). The resulting page will look similar to the following screenshot:
So, we now have a functional Friends page. It displays a list of friends along with their latest bookmarks. In the next section, we are going to create a view that allows users to add friends to this page.
Creating the add friend view
So far, we have been adding friends using the interactive console. The next step in building the friends feature is offering a way to add friends from within our web application.
The friend_add view works like this: It receives the username of the friend in GET, and creates a Friendship object accordingly. Open the bookmarks/views.py file and add the following view:
@login_required
def friend_add(request):
if 'username' in request.GET:
friend = get_object_or_404(
User, username=request.GET['username']
)
friendship = Friendship(
from_friend=request.user,
to_friend=friend
)
friendship.save()
return HttpResponseRedirect(
'/friends/%s/' % request.user.username
)
else:
raise Http404
Let's go through the view line by line:
- We apply the login_required decorator to the view. Anonymous users must log in before they can add friends.
- We check whether a GET variable called username exists. If it does, we continue with creating a relationship. Otherwise, we raise a 404 page not found error.
- We retrieve the user to be added as a friend using get_object_or_404.
- We create a Friendship object with the currently logged-in user as the from_friend argument, and the requested username as the to_friend argument.
- Finally, we redirect the user to their Friends page.
After creating the view, we will add a URL entry for it. Open the urls.py file and add the highlighted line to it:
urlpatterns = patterns('',
[...]
# Friends
(r'^friends/(\w+)/$', friends_page),
(r'^friend/add/$', friend_add),
)
The "add friend" view is now functional. However, there are no links to use it anywhere in our application, so let's add these links. We will modify the user_page view to display a link for adding the current user as a friend, and a link for viewing the user's friends. Of course, we will need to handle special cases; you don't want an "add friend" link when you are viewing your own page, or when you are viewing the page of one of your friends.
Adding these links will be done in the user_page.html template. But before doing so, we need to pass a Boolean flag from the user_page view to the template indicating whether the owner of the user page is a friend of the currently logged-in user or not. So open the bookmarks/views.py file and add the highlighted lines into the user_page view:
def user_page(request, username):
user = get_object_or_404(User, username=username)
query_set = user.bookmark_set.order_by('-id')
paginator = Paginator(query_set, ITEMS_PER_PAGE)
if request.user.is_authenticated():
is_friend = Friendship.objects.filter(
from_friend=request.user,
to_friend=user
)
else:
is_friend = False
try:
page_number = int(request.GET['page'])
except (KeyError, ValueError):
page_number = 1
try:
page = paginator.page(page_number)
except InvalidPage:
raise Http404
bookmarks = page.object_list
variables = RequestContext(request, {
'username': username,
'bookmarks': bookmarks,
'show_tags': True,
'show_edit': username == request.user.username,
'show_paginator': paginator.num_pages > 1,
'has_prev': page.has_previous(),
'has_next': page.has_next(),
'page': page_number,
'pages': paginator.num_pages,
'next_page': page_number + 1,
'prev_page': page_number - 1,
'is_friend': is_friend,
})
return render_to_response('user_page.html', variables)
Next, open the templates/user_page.html file and add the following highlighted lines to it:
[...]
{% block content %}
{% ifequal user.username username %}
<a href="/friends/{{ username }}/">view your friends</a>
{% else %}
{% if is_friend %}
<a href="/friends/{{ user.username }}/">
{{ username }} is a friend of yours</a>
{% else %}
<a href="/friend/add/?username={{ username }}">
add {{ username }} to your friends</a>
{% endif %}
- <a href="/friends/{{ username }}/">
view {{username }}'s friends</a>
{% endifequal %}
{% include "bookmark_list.html" %}
{% endblock %}
Let's go through each conditional branch in the highlighted code:
- We check whether the user is viewing his or her page. This is done using a template tag called ifequal, which takes two variables to compare for equality. If the user is indeed viewing his or her page, we simply display a link to it.
- We check whether the user is viewing the page of one of their friends. If this is the case, we display a link to the current user's Friends page instead of an "add friend" link. Otherwise, we construct an "add friend" link by passing the username as a GET variable.
- We display a link to the Friends page of the user page's owner being viewed.
And that's it. Browse some user pages to see how the links at the top change, depending on your relationship with the owner of the user page. Try to add new friends to see your Friends page grow.
Implementing the friends feature wasn't that hard, was it? You wrote one data model and two views, and the feature became functional. Interestingly, the more Django experience you gain, the easier and faster implementing features like this becomes.
Our users are now able to add each other as friends and monitor their friends' bookmarks.
Summary
In this article we developed an important feature for our project. Friend networks are very important in helping users to socialize and share interests together. These features are common in Web 2.0 applications, and now you are able to incorporate them into any Django web site.
About the Author :
Ayman Hourieh.
http://www.packtpub.com/article/building-friend-networks-with-django-1.0
Getting Started with Webpack + React
The aim of this tutorial is to set up a development environment for a React application bundled using Webpack. While the merits of Webpack and other bundlers are continually compared, this tutorial will let you get started with Webpack and help you decide for yourself.
Dependencies
If you don’t have them already, you will need to install node and npm. These two tools will allow us to manage our dependencies via the package.json. In the root project folder, run npm init and answer the setup questions to set up the package.json (the default answers should work for most users).
In addition, we’ll need the node libraries for React and Webpack plus libraries for transpilation. This can be done with a string of commands while in the root folder:
npm install --save react react-dom create-react-class webpack
npm install --save-dev babel-core babel-loader babel-preset-es2015 babel-preset-react
File Structure
Webpack works by starting with an entry point. From here, Webpack builds a dependency graph of your application, following the import and require statements to find the modules that need to be included for the program to run. For us, index.js will be that entry point. Make a dev folder as a place to hold index.js, where we will make our changes. We’ll also need a src folder for Webpack to output the bundle to.
mkdir dev src && touch dev/index.js
Now we can fill index.js with the demo code:
var React = require('react');
var ReactDOM = require('react-dom');
var createReactClass = require('create-react-class');

var Index = createReactClass({
  render: function() {
    return (
      <div>
        <p>Webpack and React!</p>
      </div>
    );
  }
});

ReactDOM.render(<Index />, document.getElementById('app'));
index.html
Before we continue Webpacking, we’ll need the actual spot where our bundle will be loaded. This occurs in index.html, which should be made in the root directory. The code is as follows:
<html>
  <head>
    <meta charset="utf-8">
    <title>React and Webpack</title>
  </head>
  <body>
    <div id="app"></div>
    <script src="src/bundle.js" type="text/javascript"></script>
  </body>
</html>
The file structure should look like this now:
. ├── dev │ └── index.js ├── index.html ├── package.json └── src
webpack.config.js
Now to set up the webpack.config.js file. This is all the information Webpack needs to output a useable bundle.
var webpack = require("webpack");
var path = require("path");

var DEV = path.resolve(__dirname, "dev");
var OUTPUT = path.resolve(__dirname, "src");

var config = {
  entry: {
    Index: DEV + "/index.js"
  },
  output: {
    path: OUTPUT,
    filename: "bundle.js"
  },
  module: {
    loaders: [{
      include: DEV,
      test: /\.js$/,
      exclude: /node_modules/,
      loader: "babel-loader",
      query: {
        presets: ['es2015', 'react']
      }
    }]
  }
};

module.exports = config;
entry tells Webpack where to start, the loaders tell Webpack how to treat each file extension and what to do with it, and output gives directions for where and how to write the bundle.
Packing
Now that we have everything in place, we should be able to see our app in action. Run ./node_modules/.bin/webpack --watch and in a few seconds you should see something like this:
➜ demo ./node_modules/.bin/webpack --watch

Webpack is watching the files…

Hash: 1594ffb6ae2044c83abe
Version: webpack 3.7.1
Time: 1477ms
    Asset     Size  Chunks                    Chunk Names
bundle.js   863 kB       0  [emitted]  [big]  Index
   [15] ./dev/index.js 468 bytes {0} [built]
    + 33 hidden modules
The --watch option gives us on-save refresh. When you make a change and save, reloading index.html in your browser will pick up the new bundle automatically.
Also, making an alias for the compilation command saves at least a few tab completions:

alias output="./node_modules/.bin/webpack"
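As an alternative to a shell alias (this is a suggestion of mine, not part of the original tutorial), you can register the command as an npm script in package.json, so the same invocation works for anyone who clones the project:

```json
{
  "scripts": {
    "build": "webpack",
    "watch": "webpack --watch"
  }
}
```

Then npm run build or npm run watch will resolve webpack from node_modules/.bin without any path juggling.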
Another Loader Example
Here is another loader, file-loader, which can be used to bundle files with image extensions into a folder called images. You can read more about loaders and their configuration in the Webpack documentation.

...
module: {
  loaders: [{
    include: DEV,
    test: /\.js$/,
    exclude: /node_modules/,
    loader: "babel-loader",
    query: {
      presets: ['es2015', 'react']
    }
  }, {
    test: /\.(jpe?g|png|gif)$/i,
    loader: "file-loader",
    query: {
      name: '[name].[ext]',
      outputPath: 'images/'
    }
  }]
}
...
https://www.digitalocean.com/community/tutorials/react-getting-started-webpack-react
Default implementation of map concatenation. More...
#include <CompileTimeKeys.hpp>
Default implementation of map concatenation.
In the following some useful helper traits to modify keys are provided.
IDE Support: To make it easier for IDEs to deduce the type of the output keys, we always pass the type Enum_ to these traits as a template parameter. It could also be auto-deduced. Furthermore, in the default implementation of the traits we add a 'type' typedef that defines default (empty) keys.
Compiler Output: To slightly increase the quality of compiler output, static_asserts are added in the default implementation of the traits. They will always fail, since the default implementation is not valid. Even so, compiler output remains hard to read for CompileTimeMaps.
IDE helper type.
https://docs.leggedrobotics.com/local_guidance_doc/structstd__utils_1_1ctk__concatenate.html
I have only a few coding tricks to pass on:
1) Instances of
self.exec_code = compile(self.exec_string,'<string>','exec')
are probably better as
self.exec_code = compile(self.exec_string,`self`,'exec')
That way tracebacks from errors during compilation will identify the
specific exec_code object that's in error.
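To see the effect in a modern Python (this snippet is mine, not Tim's; the backquotes in his version are Python 1.x shorthand for repr()):

```python
import traceback

# The second argument to compile() is the "filename" that tracebacks
# report, so labelling it per-object identifies which exec_code failed.
label = "<exec_code instance at 0xdeadbeef>"  # illustrative label
code = compile("x = 1 / 0", label, "exec")

try:
    exec(code)
except ZeroDivisionError:
    tb = traceback.format_exc()

# The traceback names our label instead of the anonymous "<string>".
assert label in tb
```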
2)
> def p(self) :
> print str(self)
'print' automatically invokes your __str__ method, so the body can be
replaced by a plain
print self
And note that, for the same reason, if x is an exec_code object,
the user gets the same output whether they do
x.p()
or
print x
This makes the .p() method marginal.
3)
> for x in range(0,self.indent_level):
> self.exec_string = self.exec_string + ' '
> self.exec_string = self.exec_string + stmt + '\n'
A) When you want a 'for' loop to execute N times, the one-argument
form of 'range' does the trick. So
for x in range(self.indent_level):
is more idiomatic here.
B) An amazing number of Python programmers don't seem to know (or
remember <grin>) that "string * int" returns string+string+...
(int catenations of "string"). So the whole sequence above can
be done in one statement:
self.exec_string = self.exec_string + ' '*self.indent_level + \
stmt + '\n'
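A quick check (mine, in modern syntax) that the one-statement form builds exactly the same string as the loop:

```python
indent_level = 3
stmt = "print 'hello'"

# Loop version, as in the original code
exec_string = ""
for x in range(indent_level):
    exec_string = exec_string + ' '
exec_string = exec_string + stmt + '\n'

# One-statement version using string repetition
one_liner = ' ' * indent_level + stmt + '\n'

assert exec_string == one_liner
```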
> I would appreciate ... comments ... especially with regard to the
> namespace issues I discussed in the comments for ... execute()
The trick to this is to stare at the table in section 4.1 of the language
(not library) reference manual, until after a week or two it dawns on you
that it means exactly (& in particular, no more than) what it says <wink>.
The other trick is to realize that the rules _imply_ that, in almost all
cases (and in all non-devious cases), the global namespace used by a
chunk of code is the namespace of the module in which that code appears.
This is a good rule, because it prevents modules from stepping on each
other by accident.
So, e.g., suppose we put your module in the file eco.py. Then:
>>> import eco
>>> e = eco.exec_code()
>>> e.append_stmt('global i')
>>> e.append_stmt('i = 45')
>>> print e
<exec_code instance> with indent-level 0 and code:
--------------------------------------------------------------
global i
i = 45
>>> i = 0 # stick "i" in __main__'s global NS
>>> dir() # yup, it's there
['__name__', 'e', 'eco', 'i']
>>> dir(eco) # but "i" is not in eco's global NS
['__name__', 'exec_code']
>>> e.execute()
>>> i # *our* value of i didn't change
0
>>> dir(eco) # but eco suddenly grew one
['__name__', 'exec_code', 'i']
>>> eco.i
45
>>>
Clearer? It might help to point out that, in your
exec self.exec_code
exec's caller is the 'execute' method, not whoever _called_ execute. So
"global n.s. of caller" in table 4.1 refers here to the global NS of
function 'execute', and the global NS of a function is the global NS of
the block _containing_ the function (not of the function's caller).
That's the chain you trace thru in 4.1: it leads from the exec, to
execute, to the class defn, and ends at the module containing the class.
5.5-on-the-technical-and-5.9-for-artistic-impression-ly y'rs - tim,
the tonya harding of python
Tim Peters tim@ksr.com
not speaking for Kendall Square Research Corp
http://www.python.org/search/hypermail/python-1994q1/0265.html
Wilkinson Charlie E wrote:
> In the top level folder (root) I have a dtml method title template
> that contains html code and dtml vars that are (intended to be)
> populated by properties of various subfolders as well as a value or
> two passed directly. The title template is "called" from dtml
> methods in the root and sub-folders using something like:
>
> <dtml-var "title_block(app_screen = 'Main Screen')">

Chances are, you're losing your namespace. When you call another method
in quotes, you tend to lose a lot of information unless you use the
cryptic:

<dtml-var "title_block(_.None, _, app_screen='Main Screen')">

or:

<dtml-var "title_block(REQUEST,_,_.None, app_screen='Main Screen')">

I'm guessing this'll fix it for you. :) Good luck!

(Oh, and try not to post HTML to the list . . .)

-CJ
Re: [Zope] Q: Namespace, Acquisition, and Properties
Christopher J. Kucera Mon, 05 Jun 2000 09:33:20 -0700
https://www.mail-archive.com/zope@zope.org/msg01376.html
[openchange]server stuff - thoughts on properties
2011-01-04 11:02:28 GMT
Hi,

I've been working on getting the server back running for mapistore_v2 branch, and one idea that seems quite useful is to treat all the folder information we need to add as just a set of named properties. So if we need a mailbox GUID and mailbox replica GUID, we could just store those as properties on the mailbox root folder. The backend operation would just be the standard op_getprops. I'm currently working on an implementation that does this for mailbox properties.

However I recognise that the properties are a scarce resource (only 16 bit space, 15 bits for the direct ids and 15 bits for the named properties). However we don't need to use the same space as the wire protocol for this.

That led to thinking about the best way to handle properties (both direct ID and named properties). It occurs to me that we could perhaps handle all properties using a larger space (perhaps 48 bits for property "number" + 16 bits for property type) which would include a GUID index (for the property namespace) together with either the property number or a name index. So on receipt of the property tag from the client, we'd map that into a server representation. That could make lookup of LID pretty easy, and might be able to be shared across the whole server. Each user would still have a mapping for their 16 bit property ids.

Thoughts about the treatment of folder characteristics as generic properties? Thoughts about the best way to manage the properties space?
http://blog.gmane.org/gmane.network.openchange.devel/month=20110101
In this section, you will learn how to compute the sum of the squares of the array elements. For this, we have allowed the user to enter 5 numbers, which are stored in an array. Then we have initialized another array to store the square of each of these elements. The elements of this second array are then added and stored in the variable 'sum', which finally holds the sum of the squares of the array elements.
Here is the code:
import java.util.*;

class Compute {
    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        System.out.println("Enter 5 numbers: ");
        int num[] = new int[5];
        int sq[] = new int[5];
        int sum = 0;
        for (int i = 0; i < num.length; i++) {
            num[i] = input.nextInt();
            sq[i] = num[i] * num[i];
            sum += sq[i];
        }
        System.out.println("Sum of the square of numbers: " + sum);
    }
}
Output:
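As a quick sanity check (my own variation, not from the original tutorial), the same computation over a fixed array shows that the sq[] array is optional; the running sum alone suffices:

```java
public class ComputeCheck {
    public static void main(String[] args) {
        int[] num = {1, 2, 3, 4, 5};
        int sum = 0;
        for (int n : num) {
            sum += n * n; // square and accumulate in one step
        }
        // 1 + 4 + 9 + 16 + 25 = 55
        System.out.println("Sum of the square of numbers: " + sum);
    }
}
```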
http://roseindia.net/tutorial/java/core/squareOfArrayElements.html
Translation Guidelines¶
To make Zulip even better for users around the world, the Zulip UI is being translated into a number of major languages, including Spanish, German, Hindi, French, Chinese, Russian, and Japanese, with varying levels of progress. If you speak a language other than English, your help with translating Zulip would be greatly appreciated!
If you’re interested in contributing translations to Zulip, please join #translation in the Zulip development community server, and say hello. And please join the Zulip project on Transifex and ask to join any languages you’d like to contribute to (or add new ones). Transifex’s notification system sometimes fails to notify the maintainers when you ask to join a project, so please send a quick email to zulip-core@googlegroups.com when you request to join the project or add a language so that we can be sure to accept your request to contribute.
Zulip has full support for Unicode, so you can already use your preferred language everywhere in Zulip.
Translation style guides¶
We are building a collection of translation style guides for Zulip, giving guidance on how Zulip should be translated into specific languages (e.g. what word to translate words like “home” to):
A great first step when getting started translating Zulip into a new language is to write a style guide, since it greatly increases the ability of future translators to translate in a way that's consistent with your work.
We have a tool to check for the correct capitalization of the translatable strings; this tool will not allow the Travis builds to pass in case of errors. You can use our capitalization checker to validate your code by running ./tools/check-capitalization. If you think that you have a case where our capitalization checker tool wrongly categorizes a string as not capitalized, you can add an exception in the tools.lib.capitalization.IGNORED_PHRASES list to make the tool pass.
Please, stick to these while translating, and feel free to point out any strings that should be improved or fixed.
Translation process¶
The end-to-end process to get the translations working is as follows.
Please note that you don't need to do this if you're translating; this only describes how the whole process works. If you're interested in translating, you should check out the translators' workflow.
Translators translate the strings in Transifex.
The translations are downloaded back into the codebase by a maintainer, using tools/i18n/sync-translations (which invokes tx pull internally).
Translators’ workflow¶
These are the steps you should follow if you want to help to translate Zulip:
- Join us on Zulip and ask for access to the organization, as described at the beginning.
- Make sure you have access to Zulip’s dashboard in Transifex.
- Ask a maintainer to update the strings.
- Translate the strings for your language in Transifex.
First of all, download the updated resource files from Transifex using the tx pull -a --mode=developer command (it will require some initial setup). This command will download the resource files from Transifex and replace your local resource files with them.

Then, make sure that you have compiled the translation strings using ./manage.py compilemessages.
Django figures out the effective language by going through the following steps:

- It looks for the language code in the URL (e.g. /de/).
- It looks for the LANGUAGE_SESSION_KEY key in the current user's session.
- It looks for the cookie named 'django_language'. You can set a different name through the LANGUAGE_COOKIE_NAME setting.
- It looks for the Accept-Language HTTP header in the HTTP request. Normally your browser will take care of this.
The easiest way to test translations is through the i18n URLs, e.g., if you have German translations available, you can access the German version of a page by going to /de/path_to_page in your browser.

To test translations using other methods you will need an HTTP client library like requests, cURL or urllib. Here is some sample code to test the Accept-Language header using Python and requests:

import requests

headers = {"Accept-Language": "de"}
response = requests.get("", headers=headers)
print(response.content)
Setting the default language in Zulip¶
Zulip allows you to set the default language through the settings page, in the 'Display settings' section. The URL will be /#settings/display-settings on your realm.

Organizations can set the default language for new users in their organization on the /#organization page.
Translation resource files¶
All the translation magic happens through resource files which hold the translated text. Backend resource files are located at static/locale/<lang_code>/LC_MESSAGES/django.po, while frontend resource files are located at static/locale/<lang_code>/translations.
https://zulip.readthedocs.io/en/stable/translating/translating.html
On 11.03.2018 17:29, MG wrote:
>
>
> On 11.03.2018 14:58, Jochen Theodorou wrote:
>> On 10.03.2018 20:33, MG wrote:
>>> Hi Jochen,
>>>
>>> I was not aware that Groovy is so sophisticated in its expression
>>> analysis, that it actually uses intersection types
>>
>> you actually do not have much of a choice. It is an AST-only
>> representation only though.
>
> What I meant was: Since Groovy is for instance still using dynamic call
> site resolution in @CompileStatic mode (see: Minecraft obfuscation
> problem), it might conceivably also fall back to Object & dynamic
> resolution in such cases...
the difference is that it is not supposed to do that in static mode ;)
[...]
>> there is almost no expressions consisting of multiple expression, that
>> we can tell the type of in dynamic mode. Even something simple as 1+1
>> can in theory return a ComplexNumber suddenly.
>
> We already touched on that topic in the past: I still think that allowing
> new Foo()
> or
> Foo myFoo(...)
> to return anything that is not of type Foo is "too flexible", and
> therefore should be disallowed, or fail.
>
> Afaics Intellisense also operates on the assumption that types given are
> honored in dynamic Grooy.
Integer foo(int i) {1}
String foo(String s) {"2"}
def bar (x) {
return foo(x)
}
at the callsite in bar you cannot tell if foo(int) or foo(String) is
supposed to be called. Methods at runtime increase the problem, but it
is not unique to them. And it does not always have to be an override in
the classic sense either
[...]
>>> From the view of my framework code that goes even more so for the
>>> related case of final x = RHS -> final typeof(RHS) x = RHS I
>>> therefore keep going on about - if dynamic Groovy does not pick up
>>> the RHS type for final, I need to keep my current code, or force
>>> framework users to use @CompileStatic on all Table derived classes,
>>> if they want to define table columns in the most elegant and concise
>>> way... :-)
>>
>> for "final x = ..." the exact type of x is in dynamic mode actually
>> totally not relevant. There is no reassignment, so that problem is out
>> here. But if we forget about that, then there is no difference between
>> "final x" and "def x". I know cases where it could make a difference,
>> but they do not exist in Groovy yet. so what exactly is final x
>> supposed to do different than def x besides the reassignment?
>
> class Foo {
> final f0 = new FoorchterlichLongerNome(...) // class field c0 will
> have type Object; when analyzing the class using reflection, field
> cannot be found by looking for fields/properties of type Col
> final FoorchterlichLongerNome f1 = new FoorchterlichLongerNome(...)
> // class field will be of type FoorchterlichLongerNome; this is the
> behavior I would wish for without explicitely being required to give
> FoorchterlichLongerNome , even in the dynamic case, for simple
> expressions (as listed above)
> }
ah, I was talking about local variables, not about fields/properties.
For me that style is more the exception. And once you move the code to
the constructor you do not get inference for the field/property anymore.
So its good only for some very specific cases. And for those to make a
difference in the dynamic mode...
bye Jochen
http://mail-archives.apache.org/mod_mbox/groovy-dev/201803.mbox/%3Cd33b8b74-8e54-e10c-9ba3-c65f76e5266a@gmx.org%3E
Setup SSH and sudo
I started with a blank application, and "capified" it. If you don't have Capistrano installed, install it via RubyGems.
sudo gem install capistrano --no-ri --no-rdoc
rails slicehost
cd slicehost/
capify .
Open up config/deploy.rb and start configuring it as you would for any other Rails application. Just make sure you have the same "default_run_options" line as I do. Here's what mine looks like:
default_run_options[:pty] = true

set :application, "slicehost"
set :repository, "git://github.com/mig/cthulhu.git"
set :deploy_to, "/home/deploy/#{application}"
set :user, "deploy"
set :scm, :git

role :app, "myslice.com"
role :web, "myslice.com"
role :db, "myslice.com", :primary => true
I've listed "deploy" as my user. This is important – the only bit of manual configuration we will do is to create this user account.
Now I will ssh into my brand new slice, change my password, and create the "deploy" user:
ssh myslice.com -l root
passwd
adduser deploy
The "deploy" user should get sudo privileges. To make things easier for this example, don't require a password. While this is fine for my DMZ-like slice, you should not leave the NOPASSWD flag on your user after finishing – you have been warned.
visudo
Add this line to the bottom:
deploy ALL=(ALL) NOPASSWD: ALL
One last thing, add your public key to the deploy user's authorized_keys file:
On your local machine:
scp ~/.ssh/id_rsa.pub deploy@myslice.com:
ssh myslice.com -l deploy
On your slice:
mkdir .ssh
cat id_rsa.pub >> .ssh/authorized_keys
rm id_rsa.pub
chmod 600 .ssh/authorized_keys
chmod 700 .ssh
Now on to the fun stuff...
Installing the Basics
Open up config/deploy.rb and create a custom namespace for our tasks:
namespace :slicehost do
  # tasks go here
end
I'm going to use Ubuntu's apt-get to install some of the basic necessities. Put these tasks in our :slicehost namespace. I've included tasks for git and sqlite3 here, you might want something different like subversion and postgres. Look at the attached file at the end for more examples.
desc "Update apt-get sources"
task :update_apt_get do
  sudo "apt-get update"
end

desc "Install Development Tools"
task :install_dev_tools do
  sudo "apt-get install build-essential -y"
end

desc "Install Git"
task :install_git do
  sudo "apt-get install git-core git-svn -y"
end

desc "Install SQLite3"
task :install_sqlite3 do
  sudo "apt-get install sqlite3 libsqlite3-ruby -y"
end
To run any of these, drop to your shell and issue a cap command:
cap slicehost:update_apt_get
To see a list of available cap commands, including our custom ones:
cap -T
Install the Rails Stack
The next example is only a little more complex, the thing you should note is the use of sudo within the command string itself. Include the && between commands if you need a command to run in a directory other than the default.
Let's install Ruby and Rails:
desc "Install Ruby, Gems, and Rails"
task :install_rails_stack do
  [
    "sudo apt-get install ruby ruby1.8-dev irb ri rdoc libopenssl-ruby1.8 -y",
    "mkdir -p src",
    "cd src",
    "wget",
    "tar xvzf rubygems-1.0.1.tgz",
    "cd rubygems-1.0.1/ && sudo ruby setup.rb",
    "sudo ln -s /usr/bin/gem1.8 /usr/bin/gem",
    "sudo gem install rails --no-ri --no-rdoc"
  ].each {|cmd| run cmd}
end
Just run the new installer task and you should be ready to go:
cap slicehost:install_rails_stack
That was easy! We've got a full Rails stack running on our slice. From here we could go a few different routes. I've been eager to try the new mod_rails Passenger out, so let's set that up!
Apache and Passenger (aka mod_rails)
First, we need to install Apache:
desc "Install Apache"
task :install_apache do
  sudo "apt-get install apache2 apache2.2-common apache2-mpm-prefork apache2-utils libexpat1 apache2-prefork-dev libapr1-dev -y"
end
And now the Passenger install. This part is trickiest, because it requires our input on the remote server. This is where the "default_run_options" setting comes in handy.
desc "Install Passenger"
task :install_passenger do
  run "sudo gem install passenger --no-ri --no-rdoc"
  input = ''
  run "sudo passenger-install-apache2-module" do |ch,stream,out|
    next if out.chomp == input.chomp || out.chomp == ''
    print out
    ch.send_data(input = $stdin.gets) if out =~ /enter/i
  end
end
Here's what's happening: the run command is passed a block, the block is telling the run command to step through all output while printing everything to the screen. Any time the words "Enter" or "ENTER" are encountered in the output, the execution waits for our input. Any input is then redirected back into the Passenger installer running on the remote server.
So, we've finished installing all the software we need (for the moment). All that's needed is to configure Apache to use Passenger, and to set up a virtual host for our application.
Apache Configuration
Here is the Apache configuration, taken directly from the output of the Passenger install:
desc "Configure Passenger"
task :config_passenger do
  passenger_config =<<-EOF
  EOF
  put passenger_config, "src/passenger"
  sudo "mv src/passenger /etc/apache2/conf.d/passenger"
end

desc "Configure VHost"
task :config_vhost do
  vhost_config =<<-EOF
    <VirtualHost *:80>
      ServerName blog.pggbee.com
      DocumentRoot #{deploy_to}/public
    </VirtualHost>
  EOF
  put vhost_config, "src/vhost_config"
  sudo "mv src/vhost_config /etc/apache2/sites-available/#{application}"
  sudo "a2ensite #{application}"
end
That looks more complicated than it really is. The trick to these tasks is the "put" method – it takes a string and a remote filename and uploads the contents of the string to the file on the remote server.
This allows us to generate the configurations locally, create them on the remote server, and then use sudo to move them into their proper place.
That's it. Our slice is ready for us to deploy as normal. I am not going to cover that here, as it's been covered elsewhere and in much more depth than I can go into here.
Wrapping Up
Once you've created all your custom tasks and verify that they work, it's a good idea to put them all together in one setup method (I use :setup_env in my :slicehost namespace). All I have to do is run the task, sit back, and watch:
cap slicehost:setup_env
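Such a combined task might look like the following deploy.rb sketch (my own illustration; in Capistrano 2, tasks in the same namespace can call each other as plain methods, and the list should match the tasks you actually defined):

```ruby
desc "Build the whole environment from a fresh slice"
task :setup_env do
  update_apt_get
  install_dev_tools
  install_git
  install_sqlite3
  install_rails_stack
  install_apache
  install_passenger
  config_passenger
  config_vhost
end
```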
Try it out for yourself. Download the entire deploy.rb.txt file, remove the .txt extension and drop it into your project.
https://www.viget.com/articles/building-an-environment-from-scratch-with-capistrano-2
Though for most of us checkpoint/restart is the least interesting feature about OpenVZ. We'd just hope for a usable container solution which is both root save and has decent userspace tool support, AND a method to join the container from the host! Most of where LXC is just crap.
Beside that, nothing fancy like container in container support, or anything else, just a way to easily separate services on linux, like solaris zones. The thing that makes me sad is, that imho 95% of the infrastructure is already in place in the linux kernel for basic and secure container. And currently there are only out of tree implementation making it usable, openvz and oracle (solaris) zones, afaik in their kernel.
CRtools 0.1 released
Posted Jul 25, 2012 12:27 UTC (Wed) by Lennie (subscriber, #49641)
"A new approach to user namespaces"
"user namespace enhancements for Linux 3.5-rc1"
Wouldn't that help to make it rootsave ? As you called it.
I think the real reason why LXC isn't such a complete solution yet, is because what goes into the kernel has to be maintained for a very long time and LXC will end up "virtualizing" a lot of parts of the kernel, so the developers want to only allow small/understandable changes each time.
It's the reason Linux V-Server, OpenVZ and I believe there was an other ? aren't already part of the kernel. The developers would never allow one big patch to add such functionality.
So every in-kernel API needs to be proposed and tweaked until it is ready and allowed into the kernel.
The theory is that each part will be better and more generalized than the independently developed ideas. If I'm not mistaken, LXC can already do a lot of things OpenVZ can't.
This takes time, but with almost each release it gets closer to 100%.
Posted Jul 25, 2012 12:36 UTC (Wed) by Lennie (subscriber, #49641)
You could propose changes or even help develop/improve them.
I would love to see someone add the ability to run lxc-execute on a running container.
Posted Jul 25, 2012 15:17 UTC (Wed) by gebi (subscriber, #59940)
Seems the only thing missing is really the join container functionality. But there afaik is the problem with lxc allowing containers inside containers.
Posted Jul 25, 2012 13:40 UTC (Wed) by slashdot (guest, #22014)
Assuming no kernel bugs, the Linux security model will guarantee security within the same container, assuming you have an unique uid per service, or use SELinux or similar, so there's no advantage in using containers.
If kernel bugs are present, then using containers or not again makes no difference since exploiting a bug within a container gives you full access to kernel mode and thus root in all containers.
Hardware-level virtualization is a different matter though and can in principle improve security.
Posted Jul 25, 2012 15:12 UTC (Wed) by dskoll (subscriber, #1630)
[Link]
What's the point of "separating services into containers"?
It can make the sysadmin's life easier. For example, we run our CRM tool in a container. When I want to upgrade it, I clone the entire container and run a test upgrade in the isolated environment. Similarly, I can upgrade the Debian release in a test container before doing it for real.
It can be very convenient to have completely isolated user-spaces (and not just for security.)
Posted Jul 25, 2012 15:14 UTC (Wed) by gebi (subscriber, #59940)
[Link]
What's the point of chroot?
In the end it's all about separation.
Be it security or in our case mostly management wise.
Each service inside an OpenVZ instance can be painlessly administered by a different admin, without special care about stepping on the toes of 20 other administrators of the other services.
Some services are really bad at separation, be it syslog configuration for all the daemons, network configuration for additional IP addresses for the different services, or even just reducing the necessary configuration!
As most things are done by policy with one service per container (or even done by e.g. Puppet).
As you see there are a _whole_ lot of reasons to split up services.
> Assuming no kernel bugs...
Assuming the earth is flat has about the same probability of being right.
> Hardware-level virtualization is a different matter though and can in principle improve security.
openVZ has near zero overhead both in terms of speed and resource usage.
Most of the time it's just one rsyslog process more per service, which is at the edge of rounding errors if you speak about 128GB being the minimal amount of ram in current intel dual cpu servers (16G sticks).
Posted Jul 25, 2012 16:52 UTC (Wed) by drag (subscriber, #31333)
[Link]
It makes root an unprivileged user. This allows you to separate application domains in a much more meaningful way than without containers.
It performs this through namespace isolation: a unique set of namespaces provided by the kernel — network namespaces, filesystem, uids, pids, etc.
With LXC and others you can choose your level of isolation too. You can run your browser with read-only filesystem support or a different home directory than the rest of your applications without having to use different users. Or you can isolate the browser entirely, or whatever.
Combine that with SELinux or whatever and you can sandbox applications in a secure manner without having to change their code.
No it can't.
If you are using virtualization to improve security you are doing it wrong.
Virtualization is about lowering administrative overhead and reducing hardware costs, among other things. It's not about improving security. People that say virtualization is for improving security are either trying to sell you on something or they don't really understand how security works.
If you have security issues with buggy code, throwing more code at the problem probably isn't going to help much. This is what virtualization does...
Posted Jul 25, 2012 18:07 UTC (Wed) by jmnovak (subscriber, #48627)
[Link]
Are there better approaches? Quite possibly; but it's certainly easy to implement a VM approach, and can be a major component of a total security policy.
--John N.
http://lwn.net/Articles/508038/
tips on structuring/laying out patches?
hi,
I'm starting to get the hang of writing Max patches, but as a (traditional) programmer, I'm getting very frustrated at the mess of it all.
Is there a book/article that explains some way to keep these things tidy?
I already use sub-patches/abstractions, segmented wires/route wire, send/receive — but they don't really seem to help much.
(and I seem to spend as much time moving stuff around to make it tidy as writing code)
I think I must be missing something pretty fundamental, as I only need about 5 objects in place before the mess starts… so you can imagine the mess my bigger projects are in ;)
Difficult to say really, as it sounds like you are doing the right thing. I admit to embracing the mess to some degree, and enjoying the process of moving stuff around. But then I didn't come from a text programming background.
I don't use segmented patch cords at all because I find it too hard to see what's happening if multiple cords overlay each other.
I do use the Alt-Y command a lot to align objects either vertically or horizontally to help keep things neat.
Other than that, definitely abstractions (and nested abstractions) to keep top level stuff small.
I also color related objects to find them quickly and (less often) color the patch cords
I develop in both traditional text-based languages and Max, and I agree that the benefits of the patching approach (speed and ease) easily come at the cost of readability. I also don't have the perfect solution, but over the years I have developed some conventions for myself that help me. None of these are revolutionary, but here they are:
– Since Max 6 I only work in projects, not on single patchers, anymore.
– Every project has only one top-level patcher that I always name main (borrowed from Java programming);
– The top-level patcher shall contain as few objects as possible. Usually it only contains a few abstractions that allow me to see the main data/signal flow. All the rest is done in abstractions.
– I try to think about encapsulation in the way I would handle classes in text-based languages. So I encapsulate by means of logical function.
– Even if I use a subpatch only in one project I usually use an abstraction, not a simple subpatcher. Like this I have a list of all functional elements in the project window.
– The names of abstraction that are only written for the current project are preceded by a project prefix (prefix.abstractionName) – kind of pseudo names spacing – to be able to recognize their scope easily.
– In the top-level patcher I keep a copy of every global variable [v] that is used in the project in a certain section. (Just to keep an overview; it also helps in debugging.) The same for pv in abstractions.
– same for all dicts in the project.
– writing convention for object names like send/receive: I always use camelCase and prefix them with the same prefix as the abstractions (to make sure that two projects do not interfere unintendedly)
– And then of course I separate the UI from the logic. All UI objects reside in one area of the top-level patch or inside a bpatcher if the UI becomes more complex. (pvar, pattr [I started to love the pattr-system] etc. are my friends).
– And I use some conventions in color-coding important abstractions by means of their function (i.e. an abstraction that contains my "audio-engine" will have a certain color).
As I said nothing groundbreaking….
yeah, that's an issue I have with segmented wires too; also the fact that if you move the objects around, you then need to re-route the wires again.
I also don't like the segment option, as I prefer to use click-and-drag connecting for wires.
A bit more playing, and it looks like getting in the habit of aligning might help — just found the align connections option, which works nicely, and routing lots of wires in one go off a selection is not bad either.
a few things I haven't figured out…
– pushing wires behind objects.. is this possible?
– sometimes when I multiple-select objects, I get the route patch cords and remove all segments options (to do them all)… other times I don't.
(actually I think this is a bug (6.1.6), as if I move things, sometimes the options return)
also window management: when I start using abstractions/subpatchers, I end up with lots of windows open (sub patches) to move between. Is there an easier way to do this?
I notice you can say for a subpatcher to display as a tab on its parent, but you have to do this every time, and it only works one level deep (I guess the latter I can live with) — any way to get around these limitations?
I guess I might get a bit more comfortable when I learn the keyboard shortcuts, to minimize all the mouse work I'm doing at the moment :)
some non-layout questions…
are there any performance disadvantages with using send/receive?
(I'm using s #0_x, r #0_x most of the time) — I know I have to be careful with indeterminate receive order (correct?)
when communicating with subpatches, do you put send/receives inside and activate directly? or always send via inlets? (I do the latter)
coll/tables — I guess these are in a global namespace (unless you use #0?)
I'm wondering how to update these 'nicely'; if I do it inside a subpatch (likely, to keep things tidy), then the usage all becomes a bit 'hidden', but I cannot pass them in? or can I? (symbol name?)
sorry, loads of questions. I guess I'm trying to find a style that works for me, so I don't end up with a pile of unreadable/unmaintainable patches.
thanks jan, our posts overlapped… but some interesting ideas there…
I like the idea of the naming conventions…
I use projects, so this looks like it could be an interesting way to go. I guess I had worried about having too many abstractions/subpatches, as I was wondering if performance would be hit (additional calling overhead), but it sounds like that should be ok.
Do you (or others) have any examples I could look at?
would be interesting to see how it all fits together.
I guess I'm getting used to what's possible in Max, but haven't seen enough to know what the 'conventions' are.
Thanks again to all for the pointers.
Hi technobear,
Personally I don't have any bad experiences with using abstractions, regarding performance. Same for send/receive, pattr etc. For me it is a cost/benefit calculation. It seems to be true that some options show different performance when setting up test scenarios. Compared to signal processing and matrix/openGL calcs, the performance gain possible in the scheduler is often really negligible. So optimizing image and sound processing is what I usually focus on. (And a readable project will definitely increase the performance of the developer :) )
Worth also mentioning BPatcher. I find it very useful as I often work on projects I want to be modular, but with UI elements accessible.
Other useful patching shortcuts:
– Holding shift when making connections allows you to connect to multiple inlets from one outlet.
– Holding alt while selecting an area of the patch also selects patchcords
– Holding cmd and making an area selection allows you to re-size multiple objects.
Hello all!
What a nice thread :)
Jan, I was surprised how similar our approaches to organizing patches are. I also use naming conventions, prefixes, and a list of sends and values on the top patches (I think of it like API docs). So below I'll describe some differences in detail.
– I use packages instead of projects. I tried to use projects a few times, but every time I encountered strange problems and I gave up. So now for every new project I create a new package directory structure with a simple bash script.
– I name all global variables in UPPER case, like in text-based languages.
– Usually I have two or three top-level patches separated by functionality. I name these patches starting with a "+" sign, so they are always at the top of the file list in the finder/explorer. For example: +control.maxpat, +sound.maxpat, +video.maxpat, etc.
– I always color send/receive and value objects. They represent "holes" in message flow and shared state respectively, and it is easy for me to keep them in mind when analyzing a patch when they are colored.
– I use period or underscore naming conventions instead of camelCase, because it is really easy to select and edit one part of the name when the parts are separated with "_" or "." by simply double-clicking on that part.
– Some guys mention here that segmented patch cords hurt understanding of how a patch works. I solved this problem by coloring patch cords. I think patches look a lot cleaner with segmented cords.
– And of course abstractions. I don't like subpatches, mostly because they make managing a project with a VCS harder (especially when merging branches).
For separating UI from logic I use Jamoma framework. It is a brilliant piece of software that really helps me to structure my patches with MVC approach. Except a bunch of helpful utility externals/abstractions it allows to easily control the patch from CUE-scripts (video). Another killer-feature of Jamoma – [jcom.pack≈] external that allows to pack up to 32 MSP signals to single patchcord. Very handy when dealing with multichannel setups. If someone interested I can recommend to read original paper on Jamoma basics.
Also a month ago I discovered another MVC framework based on the pattr system — tLb. I haven't tried it yet, but it seems the guy who developed it did good work. He describes implementation details on the site.
By the way, guys, are you using a VCS for your projects? I found it a little tricky and am interested in what tricks you are using when managing Max projects with a VCS.
—
@technobear, I can suggest you not worry about performance before you actually encounter problems with it. From my experience, abstractions/subpatches do not cause noticeable performance degradation.
thanks again some great ideas and those shortcuts are very handy.
ok, so it seems there's some agreement on using abstractions (rather than subpatches),
so how do you make working with this easier?
… is there a way of creating an abstraction from a collection of objects?
(like you can make a subpatch) or do you just use encapsulate, then save the patch?
… how can you rename abstractions easily? (I'm thinking code refactoring here)
as far as I can tell, I'd have to rename it everywhere it's used, and also to perform the rename I have to use finder… or am I missing something?
seems a bit of a pain, but still ok.
yeah, a wiki article would be cool…
(as would be a ‘dummy’ project, which showed these kind of ‘coding standards’ in use)
seems there are a lot of tools we can use to tidy things up (I’m looking at pvar, pattr pv which look interesting)
one thing that I'm struggling with is sharing non-basic data types, e.g. coll, table, dict.
I'd like to be able to create an abstraction that does something to an existing one (e.g. a coll)
(a bit like passing in a reference in C++/Java)
e.g. let's say I have in my main patch two colls called MajorScale and MinorScale. I want to be able to have 2 instances of a (single) abstraction called Transpose; in one instance I want it to use MajorScale, in the other MinorScale. seems I cannot, as Transpose would have to know the name of the coll, and I cannot pass it in…. (or can I, using some magic max?)
e.g. I'd like to do something like Transpose @scale MajorScale, Transpose @scale MinorScale
currently, I'm trying to approach the problem a different way, but it's a bit of a pain.
@Rick, thanks, that's exactly what I was looking for :) … I'd come across #0, but not #1 etc.
@Holland, that's cool too, useful for table and coll.
Only thing is, it's a shame refer does not exist for dictionary.
I'm a bit confused about the dictionary help text; it reads that if you attach dictionaries and use the dictionary message you get a reference, but when I tested this, I always got a clone.
e.g. below I'd like the 'unnamed' dictionary in the centre to be made to refer to dA or dB,
such that if I change it (using the set method) it will also change the referenced dict.
(like refer does for table/coll)
<code>
----------begin_max5_patcher---------- 708.3oc2Y1sjZBCFF9X3pfISOz5PRH7SOydazYGmfj5lczfSHZc6N68dIIfZ 2p6RQLk5AhS9Cd4IueIeQew2CjWtmUAB9Rv2B77dw2yyTktBulxdf0z8KVQq LcCrlUUQWx.SrsoX6Ul5qXp.Z9h.DNpsMw10bwJlxLPTSkeuTnp3+joqChlF drukaUscF1TqsJ0yaXVIB.AOzzDuv7XKye5yDL3jatft1zavLImtpskMT0hG 4hkykrEJ6MCkAmRlD.iI0pHHAquBSmFF7fdHu56quL4JwRAeghWJnxmCJ9pq ASTR+.STnlEYlqog2btLy4bItWbAFAu4b4SHWCCbZ+LIPSXSiWAiLgR8kF05 OmIO6aN7fDj0RSwjyYBZ9JiHC+anB5RTgKTfIAfbpX44AT1UAnTyWDClPg++ aW5WrCNMdJp1hfPj6d+R+V0EmjnYxcmegzKZTuobxfYWtDMVZyYw4HAdMKnf CO5OtOCfP8K.hjLbq2JX+nVImMqkq2sDcIvbLoHMeN74rLhbMdHXlERYlUbL PpYLUzcrh40Og5gLmpTRd9Vk8DBdGXkGfUaeJNwQbA6hoMC5GpP11iYDQhcc XKpegs333igs2nSYzREHB6bpD1KpDQzVuAAJuSr5fblqgHbEl1ybBPlsAsG4 XDGt9tyAyFIyA8KyUBxj35+ro.yCBrhKd6uSiQq55+84kpxsxEs3occqfi5s fUo3BplZmzIsA8jN8HunfINcK+BdkVjEGT4e3P5rdB6hdhcldvvQldHcPO5b jbjdLu5vOPOuQz2T9D2A9fcHeR6Bebn+IsK7Iycw6jtvmT2Ne8gwWD2FeMhz CdjseQmziC4STmmuftQOIiK8n+6XtcyW1zgna1riIqZtmFoTmP5SkRcw3Ilh bgsn47L.IaGus+1NPk0o5opyZaqzlI39zXfu947p+uvl0Yyc -----------end_max5_patcher-----------
</code>
+1 for wiki page. I’ve created one:
Will fill it with examples when I got some free time.
maybe you expect too much. don't forget that five max objects can do what needs 2 pages of text in machine language or in C++. you create a buffer~ in max and start using it — without the need to define variables for the namespace or care about the memory management yourself.
many of us have a love-hate relationship with max connections, but that's what max is all about. :)
-110
https://cycling74.com/forums/topic/tips-on-structuringlaying-out-patches/
- Repeat the following until the user just presses Enter
- Ask the user for a date of the form February 28, 2010 (you will need to use a StringTokenizer to break apart the pieces)
- Print out tomorrow's date like: March 1, 2010
- To help shorten your code, please:
- Create an array of month names
- Create an array of days per month
- Now, you can easily check to see how many days are in the month you're in to see if tomorrow should be in the next month
- Try to combine the cases of months with 30 days, months with 31 days, and February
- You will likely want a separate case for the last day of December
January -> 31 days
February -> 28 days
March -> 31 days
April -> 30 days
May -> 31 days
June -> 30 days
July -> 31 days
August -> 31 days
September -> 30 days
October -> 31 days
November -> 30 days
December -> 31 days
import java.util.*;

public class Proj5 {
    public static void main(String[] args) {
        Scanner s = new Scanner(System.in);
        String[] months = {"January", "February", "March", "April", "May", "June",
                           "July", "August", "September", "October", "November", "December"};
        int[] days = {31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31};
        String date;
        do {
            System.out.print("Enter a date (Month Day, Year): ");
            date = s.nextLine();
            if (!date.equals("")) {
                StringTokenizer st = new StringTokenizer(date, " ,");
                String monthIn = st.nextToken();
                int day = Integer.parseInt(st.nextToken());
                int year = Integer.parseInt(st.nextToken());
                for (int i = 0; i < 12; i++) {
                    for (int j = 0; j < 12; j++) {
                        if (months[i].equals(monthIn)) {
                        }
                        //if (day[j].equal(days))
                    }
                }
            }
        } while (!date.equals(""));
    }
}
I'm having trouble with the if statements. I'm not sure how to get my days of each month to line up after it finds the month they typed in, and then how to group months together.
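For what it's worth, the month lookup doesn't need a nested loop: find the month's index once, then compare the day against the days-per-month table at that index. Here is a sketch of that logic in Python (for clarity; names are illustrative, and the same structure maps directly onto the months/days arrays in the Java above — leap years are ignored, as the assignment's table does):

```python
MONTHS = ["January", "February", "March", "April", "May", "June",
          "July", "August", "September", "October", "November", "December"]
DAYS = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def tomorrow(month, day, year):
    i = MONTHS.index(month)           # single lookup instead of a nested loop
    if day < DAYS[i]:                 # not the last day: just bump the day
        return MONTHS[i], day + 1, year
    if i == 11:                       # December 31: roll over the year
        return MONTHS[0], 1, year + 1
    return MONTHS[i + 1], 1, year     # last day of any other month

print(tomorrow("February", 28, 2010))  # → ('March', 1, 2010)
```

Note how the single `days[i]` comparison covers 30-day months, 31-day months, and February at once, with December 31 as the only special case — exactly the grouping the assignment hints at.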
http://www.dreamincode.net/forums/topic/218542-get-date-then-print-date-1-day-trouble-on-if-statements/
I got used to my touchpad, which allows scrolling smoothly and very precisely, but I cannot simulate that with the Java Robot — mouseWheel takes only integer parameters, and scrolling is carried out in steps. Can I simulate smooth scrolling in Java?
robot.mouseWheel(a); // int a
Scrolling will always be in units of "notches of the wheel" (before you ask: that's how the measurement is named in the docs). This is simply how it's implemented by the hardware (to be more specific: the mouse). How many pixels are scrolled per "notch" is nothing but OS configuration. You can't mess with that in pure Java, and I wouldn't recommend it even if it were possible.
What you can do nevertheless is to slow down the speed at which the robot scrolls:
import java.awt.Robot;

public class Test {
    public static void main(String[] args) throws Exception {
        // time to switch to a specific window where the robot ought to be tested
        try { Thread.sleep(2000); } catch (InterruptedException e) {}
        Robot r = new Robot();
        for (int i = 0; i < 20; i++) {
            // scroll and wait a bit to give the impression of smooth scrolling
            r.mouseWheel(1);
            try { Thread.sleep(50); } catch (InterruptedException e) {}
        }
    }
}
https://codedump.io/share/7vtRqsvW1Lvw/1/how-to-do-smooth-scrolling-by-java-robot
Opened 7 years ago
Closed 7 years ago
#17182 closed Cleanup/optimization (fixed)
Best Practice for forms.Form.clean
Description
I would advocate changing the example from:
def clean(self):
    cleaned_data = self.cleaned_data
    # ...
to
def clean(self):
    cleaned_data = super(ContactForm, self).clean()
    # ...
for the sake of painless form inheritance. Any opposition?
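To see why the super() form makes inheritance painless, here is a minimal sketch with stand-in classes (the Form/ContactForm names mirror the ticket, but these are not the real django.forms classes): with the self.cleaned_data form, a subclass's clean() would silently skip the parent's cleaning.

```python
# Stand-ins for django.forms, just enough to show the clean() chaining pattern.
class Form:
    def __init__(self, data):
        self.cleaned_data = dict(data)

    def clean(self):
        return self.cleaned_data

class ContactForm(Form):
    def clean(self):
        # super() keeps any validation done by the parent class
        cleaned_data = super(ContactForm, self).clean()
        if not cleaned_data.get("email"):
            cleaned_data["email"] = "unknown"
        return cleaned_data

class PhoneContactForm(ContactForm):
    def clean(self):
        # Because every clean() calls super(), the whole chain runs:
        # Form.clean() -> ContactForm.clean() -> PhoneContactForm.clean()
        cleaned_data = super(PhoneContactForm, self).clean()
        cleaned_data["phone"] = cleaned_data.get("phone", "").strip()
        return cleaned_data

form = PhoneContactForm({"phone": " 555-0100 "})
print(form.clean())  # parent defaults applied: email filled in, phone stripped
```

Had PhoneContactForm started from self.cleaned_data instead, the email default from ContactForm would never be applied.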
Attachments (2)
Change History (9)
Changed 7 years ago by
comment:1 Changed 7 years ago by
comment:2 Changed 7 years ago by
Calling super() is probably good practice; however, if we change the code sample, then the documentation around it should also be modified, since it refers to self.cleaned_data.
Playing the devil's advocate, though, I'm not sure that this change would add much value in that part of the documentation. I'm wary it could be slightly confusing for beginners. Also it feels that by the time someone gets to deal with form inheritance, they're not beginners any more and therefore should already know better how to use super().
I don't feel strongly enough either way, so I'll let someone else intervene to triage this ticket :)
comment:3 Changed 7 years ago by
I think it's worth changing to use super(). For a ModelForm, you need to call the superclass clean() in order to get uniqueness checking done. Yes, we note that in the ModelForm doc, but I think it is often missed. Having the clean() example (even if it's a plain form clean) show the best practice makes sense to me.
comment:4 Changed 7 years ago by
I would add a very short note about why the call to super - "to maintain validation logic in the parent classes, use super to call the parent's clean method"
Changed 7 years ago by
Added documentation note
Though I guess it might be helpful to add an explanation that the base forms.Form.clean method simply returns self.cleaned_data. I'm wary of complicating things though.
https://code.djangoproject.com/ticket/17182
perl5153delta
Perl 5.15.2 introduced subroutines in the CORE namespace. Most of them could only be called as barewords; i.e., they could be aliased at compile time and then inlined under new names.
Almost all of these functions can now be called through references and via &foo() syntax, bypassing the prototype. See CORE for a list of the exceptions.
The debugger now has disable and enable commands for disabling existing breakpoints and reënabling them. See perldebug.
The array/string index offsetting mechanism, controlled by the $[ magic variable, has been removed. $[ now always reads as zero. Writing a zero to it is still permitted, but writing a non-zero value causes an exception. Those hopelessly addicted to FORTRAN-style 1-based indexing may wish to use the module Array::Base, which provides an independent implementation of the index offsetting concept, or Classic::Perl, which allows Array::Base to be controlled through assignment to $[.
The ordinary XS(name) declaration for XSUBs will continue to declare non-'static' XSUBs for compatibility, but the XS compiler, ExtUtils::ParseXS (xsubpp), will emit 'static' XSUBs by default. ExtUtils::ParseXS's behaviour can be reconfigured from XS using the EXPORT_XSUB_SYMBOLS keyword; see perlxs for details.
in the generated XS code has been simplified.
The previously broken "INCLUDE: ... |" functionality has been repaired (CPAN RT #70213).
A compatibility workaround for modules that cannot live with the new XSUB staticness (see "XSUBs are now static" above) has been implemented with the PERL_EUPXS_ALWAYS_EXPORT and PERL_EUPXS_NEVER_EXPORT preprocessor defines.
The compiler warnings when the -except option is used with xsubpp have been fixed.
The XSUB.h changes to make XS(name) use XS_INTERNAL(name) by default (which were in the 5.15.2 dev release of perl) have been reverted, since too many CPAN modules expect to be able to refer to XSUBs declared with XS(name). Instead, ExtUtils::ParseXS will define a copy of the XS_INTERNAL/XS_EXTERNAL macros as necessary, going back to perl 5.10.0. By default, ExtUtils::ParseXS will use XS_INTERNAL(name) instead of XS(name).
Fixed regression for input-typemap override in XS argument list (CPAN RT #70448).
ExtUtils::Typemaps".
It is now better at detecting the end of a pod section. It always checks for =cut, instead of checking for =end (if the pod begins with =begin) or the end of the paragraph (if the pod begins with =for) [perl #92436].
It is also better at detecting variables. A method call on a variable is no longer considered part of the variable name, so strings passed to a method are now hidden from filters that do not want to deal with strings [perl #92436].
POSIX no longer uses AutoLoader. Any code which was relying on this implementation detail was buggy, and may fail as a result.
POSIX::Termios::setattr. ext/POSIX/t/unimplemented.t was added to test the diagnostics for unimplemented functions, ext/POSIX/t/usage.t to test the diagnostics for usage messages, and ext/POSIX/t/wrappers.t to test the POSIX wrapper subroutines.
goto &xsub and hints.
&foo() calls for CORE subs.
Remove unnecessary includes, fix miscellaneous compiler warnings and close some unclosed comments on vms/vms.c.
Remove sockadapt layer from the VMS build.
defined(${'$'}) stopped returning true if the $$ variable had not been used yet. This has been fixed.
defined(${"..."}), defined(*{"..."}), etc., used to return true for most, but not all, built-in variables, if they had not been used yet. Many times that new built-in variables were added in past versions, this construct was not taken into account, so this affected ${^GLOBAL_PHASE} and ${^UTF8CACHE}, among others. It also used to return false if the package name was given as well (${"::!"}) and for subroutines in the CORE package [perl #97978] [perl #97492] [perl #97484].
defined(*{"foo"}) where "foo" represents the name of a built-in global variable used to return false if the variable had never been used before, but only on the first call. This, too, has been fixed.
(readline, etc.) used to call FETCH multiple times, if it was a tied variable, and warn twice, if it was undef [perl #97482].
*{ ... .
local @{"x"}; delete $::{x}), resulting in a crash on scope exit.
setpgrp($foo) used to be equivalent to ($foo, setpgrp), because setpgrp was ignoring its argument if there was just one. Now it is equivalent to setpgrp($foo, 0).
*$tied = \&{"..."} and *glob = $tied now call FETCH only once.
chdir, chmod, chown, utime, truncate, stat, lstat and the filetest ops (-r, -x, etc.) now always call FETCH if passed a tied variable as the last argument. They used to ignore tiedness if the last thing returned from or assigned to the variable was a typeglob or reference to a typeglob.
\&$tied could either crash or return the wrong subroutine. The reference case is a regression introduced in Perl 5.10.0. For typeglobs, it has probably never worked till now.
given was not scoping its implicit $_ properly, resulting in memory leaks or "Variable is not available" warnings [perl #94682].
-l followed by a bareword no longer "eats" the previous argument to the list operator in whose argument list it resides. In less convoluted English: print "bar", -l foo now actually prints "bar", because -l no longer eats it.
The filetest ops (-r, -x, etc.) started calling FETCH on a tied argument belonging to the previous argument to a list operator, if called with a bareword argument or no argument at all. This has been fixed, so push @foo, $tied, -r no longer calls FETCH on $tied.
shmread was not setting the scalar flags correctly when reading from shared memory, causing the existing cached numeric representation in the scalar to persist [perl #98480].
behaviour has been restored.
If make is Sun's make≥,.
http://search.cpan.org/~drolsky/perl-5.15.6/pod/perl5153delta.pod
Introduction
Nakano, K., et al. (refs below) describe two versions of a small stack machine suitable for implementation on an FPGA, and they give the Verilog source code on their web site. The design was ported to the DE2 board and extended to have a richer set of opcodes and i/o ports. I wrote a simple assembler and compiler for the architecture and implemented serial communication routines. The compiler supports inline macros, functions, one-dimensional array variables, and the usual if-then-else-endif and while-do-endwhile structured programming. Supplied functions allow you to send and receive integers over the serial interface and to send strings and integers to the LCD.
The newest cpu and example hardware, example software, quartus archive, and compiler can be used to implement the multiprocessor shared-memory VGA graphics example below.
CPU
The cpu is a stack machine. The registers are arranged in a stack with the usual range of arithmetic operators and stack manipulation instructions. There are no named registers except top-of-stack (named top) and next-to-top-of-stack (named next).
The CPU design is from Nakano et al. and is GPL licensed. I added load/store opcodes and program counter push/pop. Load and store enable register indirect addressing, while PC manipulations enable functions. The single-cycle version with a stack size of 16 occupies about 1100 logic elements (out of 33,000 on the DE2). Reducing the stack size to 8 drops the size to 940 logic elements. Setting QuartusII to optimize for speed (see Tools...Advisors...Timing optimization advisor) increases the size to 1000 logic elements and increases the operating frequency to 74 MHz.
The cpu was extended to allow for up to eight in/out ports, with four appearing outside the cpu module. I/O addressing is shown in the opcode table below. The remaining four will be used for internal i/o, perhaps with a timer peripheral. The architecture taken from Nakano et al. is shown below, but modified for 8 i/o ports and with an extra connection from the PC to the data bus (dbus).
CPU opcodes and instruction format
The cpu instruction set is shown below. Notation:
I is a 12-bit signed integer
A is an unsigned 12-bit integer
n is a 3-bit integer (used only as an i/o address)
top is top-of-stack
next is second-to-top-of-stack
PC is the program counter
mem[A] is the contents of memory location A
See this page for older versions and development sequence.
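The top/next stack discipline described above can be sketched as a tiny interpreter. This is a Python illustration, not the Verilog implementation: the opcode names are borrowed from the table, and the store convention (address on top, value beneath it) is an assumption made for the example.

```python
def run(program, mem_size=16):
    """Evaluate a list of (op, operand) pairs on a data stack plus a small RAM."""
    stack, mem = [], [0] * mem_size
    for op, arg in program:
        if op == "push":             # push immediate I
            stack.append(arg)
        elif op == "add":            # next + top replaces both
            top = stack.pop()
            stack.append(stack.pop() + top)
        elif op == "load":           # top becomes mem[top]
            stack.append(mem[stack.pop()])
        elif op == "store":          # mem[top] = next, both popped (assumed order)
            addr = stack.pop()
            mem[addr] = stack.pop()
    return stack, mem

# Syrup's "count 1 add =count", with count assumed to live at address 0:
prog = [("push", 0), ("load", None),   # count
        ("push", 1), ("add", None),    # count + 1
        ("push", 0), ("store", None)]  # =count
stack, mem = run(prog)
print(mem[0])  # → 1
```

The point is that every arithmetic opcode consumes top and next and pushes one result, which is why the compiler never needs register allocation.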
Simple Compiler (Syrup)
A simple compiler, named Syrup, was written in matlab (also runs in Octave) to make programming easier.
Source code is assembled into a mif file, which can be read by the ram block. The link between the mif file and the Pancake cpu ram block is implemented with a synthesis directive in the same statement as the memory declaration, before the semicolon which terminates the declaration:
reg [DWIDTH-1:0] mem [WORDS-1:0] /* synthesis ram_init_file = "test_stack_machine.mif" */ ;
Search for /* synthesis ram_init_file in the verilog file containing the cpu and modify the mif name.
If you change the mif file by recompiling (but keep the mif file name the same), then you can change the memory contents without having to rebuild the whole project:
(1) Click Update Memory Initialization File on the Processing menu.
(2) After using this option, run the QuartusII Assembler (Processing menu...Start...Start Assembler) to generate new programming files for the device.
The compiler_v1_15 syntax is stack based and written in Matlab (version 1.0, compiler_v1_12). A description follows:
constant keyword. Example:
constant
key3mask 8 ; name -- value pair
redLEDs 3 ; port 3
variable keyword. Example:
variable
test ; define a 1 word variable
particle 100 ; define array of length 100
inline keyword. Inline code is delimited by inline/endinline. Example:
inline
inc 1 add
endinline
function keyword. Example:
function
getchar
putchar
delay_10
puthex
program keyword. Example:
program ; this section contains the actual program
main: 0 =count ; loop forever waiting for human input
while forever ; never exit
do "enter> putstr gethex crlf puthex space count puthex crlf count 1 add =count
endwhile ; end of infinite loop
main:.
redLEDs key3mask add =test puts the value 11 into the variable test.
var1[var2] treats var2 as an index into var1 and places the value of the indexed variable on the stack.
=var or =var1[var2] stores the value on the stack to the appropriate location.
The following increments count by pushing the memory value onto the stack, pushing 1 onto the stack, adding them, and popping the stack back into memory.
count 1 add =count
add, sub, mul, mfix, shl, shr, asr, band, bor, bxor, and, or, eq, ne, ge, le, gt, lt, neg, bnot, not, drop, over
"string places the string on the stack (with a character count) to be printed by putstr (see below).
"enter> places the characters enter> on the stack with the e at next-to-top of stack and the character count at top-of-stack.
The string is printed by putstr.
I/O is done with in[const] and out[const]. For example, with redLEDs defined above:
out[redLEDs]
A function is defined at a label such as funct_entry: and called by naming it, funct_entry. The exit point of a called function is indicated by return.
inline make_vga_addr
8 shr 9 shl =temp
8 shr temp add
endinline
y[count] x[count] make_vga_addr out[vga_addr]
if then else endif.
if x[count] 480 gt
then 478 =x[count]
endif
while do endwhile.
This example compares counter2 to zero and increments the counter until it overflows.
while counter2 0 ne
do counter2 1 add =counter2
endwhile
Assembly opcodes may be written directly as opcode.operand, or opcode. if there is no operand.
inline swap
pop.4 ; locs 4&5 are hidden temp locations
pop.5
push.4
push.5
endinline
The compiler generates code to initialize the return stack and then jumps to
main.
Code starts executing at memory location zero, but your program starts at
main:.
The return stack is allocated in high memory, with variables just below. There is no collision detection between code and variables.
Note that the parser is really stupid! No spaces are allowed between the equal sign and the variable name. No spaces are allowed in the indexed variable syntax.
Compiler wish list: Local variables, nested inlines, "include", variable/array initialization
Compiler Example
A short example shows how to blink LEDs. It shows the five basic sections of a program:
; This program demos compiler
; with LED output and button input
; ==================================
constant ; named constants
key3mask 8
key2mask 4
keys 1 ; port 1
keymask 15 ; 0x0f
pattern2 255 ; 0xff
pattern3 15 ; 0x0f
redLEDs 3 ; port 3
greenLEDs 2 ; port 2
forever 1 ; endless loop
; ==================================
variable
test ; location to push test data
counter1 ; outer loop counter
counter2 ; inner loop counter
; ==================================
inline
inc 1 add
endinline
; ================================
function
evalkey
; =================================
program ; this section contains the actual program
main: 0 out[greenLEDs] ; reset the green LEDs
0 =counter1 ; init counter
while forever ; never exit
do counter1 inc ; get the counter and increment
dup ; copy stack top
out[redLEDs] ; output one copy, one on stack to store
=counter1 ; save the counter
; slow it down with an inner loop counter
1 =counter2 ; reset and store inner counter
while counter2 0 ne ; compare stack top to zero
do counter2 inc =counter2 ; inc the counter
endwhile ; end of inner loop
; detect some button presses
if ; is KEY[3] pressed?
key3mask evalkey ; detect 4th bit set
then ; key 3 is pressed
pushi.pattern2 out.greenLEDs
else ; key 3 is not pressed
pushpc. out[greenLEDs]
endif
endwhile ; end of outer loop
;=== read keys function ====
; enter with a switch selector bit on the stack
; exits with a TRUE/FALSE for match/nomatch on stack
evalkey: in[keys] bnot ; invert so key-down==1
keymask band ; use only lower 4 bits
eq ; compare to specific_keymask
return
;===end of code ============================
Multiprocessor graphics
Three fast processors were hooked up to SRAM to control the VGA. A hardware SRAM memory multiplexer was built to give priority to reset, then to the VGA controller, then each of the three cpus. The source code has to signal that it wants SRAM access, then wait for SRAM available, then read/write and then signal completion. SRAM access is interleaved between the VGA controller and the three cpus. The VGA controller gets access on every VGA clock high, while the cpus share every VGA clock low. This works because memory is being clocked twice as fast as the VGA clock. On every VGA clock high, an address is set up based on the VGA address generator. On the VGA clock low, the SRAM data for the VGA is buffered into a register, while the address for the cpu read/write is set up. On the next VGA clock high, the SRAM data is buffered into a register for each cpu, while the next VGA controller read is set up. Execution time for the code speeds up by a factor of five for 1200 particles on each cpu, producing an aggregate of around 32,000 particles.
A ROM character generator for VGA was built, based on the data from ECE 320 at BYU. The file from BYU is here, the matlab program to convert it to an Altera mif file is here, and the mif file is here. The ascii character code is multiplied by 16 to form the base index for a character. The data at the base index location is the top byte (of 16) of the character image. The high order bit of the byte is the left-most pixel of the top line of the character. The ROM was connected to i/o ports on the stack processor, cpu 1, where a small routine reads the ROM and outputs colors to the VGA SRAM interface.
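The indexing scheme just described (base index = ascii × 16, byte 0 = top row, bit 7 = leftmost pixel) can be sketched in C. This is a software illustration of the ROM addressing only; `rom` and the function names are stand-ins, not part of the actual Verilog design:

```c
#include <stdint.h>

/* Fetch one row of a glyph from a character-generator ROM laid out
   as described above: 16 bytes per glyph, byte 0 is the top row. */
static uint8_t glyph_row(const uint8_t *rom, uint8_t ascii, int row)
{
    return rom[(unsigned)ascii * 16 + row];
}

/* Test whether pixel `col` (0 = leftmost) of that row is lit;
   the high-order bit of the byte is the leftmost pixel. */
static int glyph_pixel(const uint8_t *rom, uint8_t ascii, int row, int col)
{
    return (glyph_row(rom, ascii, row) >> (7 - col)) & 1;
}
```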
Multiprocessor data sharing
The SRAM interface to the VGA display actually has over 100,000 unused bytes which are not displayed, but the unused memory is in small chunks. The biggest piece of available memory is from address 246,400 to 262,144, or about 16 kbytes. These unused locations can be used to share non-graphics data between processors. We need 16-bit read/write functions and a mutex to lock memory. The SRAM switch used in the graphics functions above was extended with new functions to allow 16 bits to be written (the graphics interface writes only single bytes). The mutex is implemented using hardware test-and-set, clear, and read instructions. The hardware switch prioritizes memory access first, then mutex operations. On the processor side, the program must: (1) set up an sram read, write, or mutex operation, (2) assert a request, (3) wait for access acknowledgment, (4) do the read/write, (5) de-assert the request. The test program computes a diffusion-limited aggregation, as above, maintains a shared (mutex protected) count in sram, and maintains a shared run/done flag in sram. (hardware, software, archived project). And a video of the aggregate growth. The image below shows two counters in the upper left. The green counter is from shared, mutex protected memory. The red counter is from shared, unprotected memory. The unprotected count is almost always lower than the protected count because of rare (but inevitable) overlap of two cpus trying to update the count at the same time. Another set of mutexes guarantees that cpu 2 and 3 will finish before cpu 1 tries to print the final count. It does this by setting a lock for cpu 2/3, then having each cpu clear its lock when it finishes.
The processor interface to the memory switch uses several i/o ports:
The control word on out2 has the following format. The 4 request lines are mutually exclusive.
The seven functions to access shared memory/mutex are written as inline functions described in the table below. The test-and-set operation on a mutex is atomic. It is guaranteed that if two cpus both try to set a mutex at the same time, only one will succeed, and they both will agree which one has set it.
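The hardware test-and-set mutex described above follows the same discipline as a classic software test-and-set lock. A minimal C11 sketch of that discipline (an analogy to illustrate the atomicity guarantee, not the FPGA implementation):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Test-and-set lock: atomically read the flag and set it.
   Only the caller that observed `false` owns the lock, so if two
   cpus race, exactly one succeeds -- the guarantee stated above. */
static atomic_flag lock = ATOMIC_FLAG_INIT;

static bool try_acquire(void)
{
    /* atomic_flag_test_and_set returns the previous value */
    return !atomic_flag_test_and_set(&lock);
}

static void release(void)
{
    atomic_flag_clear(&lock);   /* the "clear" mutex operation */
}
```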
References:
Nakano, K.; Ito, Y., "Processor, Assembler, and Compiler Design Education Using an FPGA," Parallel and Distributed Systems (ICPADS '08), 14th IEEE International Conference on, 8-10 Dec. 2008, pages 723-728. (Dept. of Inf. Eng., Hiroshima Univ., Higashi-Hiroshima, Japan)
Nakano, K.; Kawakami, K.; Shigemoto, K.; Kamada, Y.; Ito, Y. A Tiny Processing System for Education and Small Embedded Systems on the FPGAs, Embedded and Ubiquitous Computing, 2008. EUC '08. IEEE/IFIP International Conference, Dec. 2008 pages: 472 - 479
John S. Loomis, Digital Labs using the Altera DE2 Board, Electrical and Computer Engineering, University of Dayton, Dayton, OH 45469-0232
March 20, 2013
Bruce Land
https://people.ece.cornell.edu/land/courses/ece5760/DE2/Stack_cpu.html
I'd like to start at the top of a WMI namespace, recurse through all the objects, then recurse through each object's property list, filtering out and returning to console only those properties that have mem in their names.
This is what I have so far:
gwmi -namespace "root\cimv2" -list |???? |get-Member -MemberType property | Where-Object { $_.name -match 'mem'}
Notice the big |???? in the middle there. If I remove that, then the command seems to run, but fails to find properties I know should be found. Why? I think it is because I get different output from the following two commands:
gwmi "Win32_operatingSystem" |get-Member -MemberType property (73 lines of output)
gwmi -namespace "root\cimv2" -list |where-object { $_.Name -eq 'Win32_OperatingSystem' } |get-Member -MemberType property (10 lines of output)
What I want is a one-liner that recursively concatenates this process:
gwmi -namespace "root\cimv2" -list
(manual selection of a WMI class from the list of 1000+ which appear)
gwmi "win32_OperatingSystem" | get-Member -MemberType property | Where-Object { $_.Definition -match 'mem' }
Thanks, and if a working answer is given, I will accept and upvote it (annoying when people never do that, isn't it?).
Note added 2009/11/14: I haven't awarded the answer yet because no one has yet provided a one-liner which solves the problem.
I think this will do what you are looking for in one line. The CIMV2 namespace is there by default, but if you want to select a different one, you can use gwmi -namespace.
The "trick" is nesting a where-object inside the foreach-object
gwmi -list | % {$_.Properties | ? {$_.Name.Contains('Memory')}} | select Origin,Name
It looks like that this:
gwmi -namespace "root\cimv2" -list
returns an array of ManagementClass .Net objects, so you can use ManagementClass.Properties collection to select properties that have a certain string in their names. Here is my PowerShell script:
foreach($class in gwmi -namespace "root\cimv2" -list)
{
foreach($property in $class.Properties)
{
if($property.Name.Contains('Memory'))
{
$class.Name + ' --- ' + $property.Name
}
}
}
As you can see I am a PowerShell beginner, but I think you can make a 'one-liner' from that.
I think by listing the namespace you get WMI CLASS Objects, but not the actual object instances - which you get by gwmi "win32_OperatingSystem"
If you use gm you will see:
TypeName: System.Management.ManagementClass#ROOT\cimv2\Win32_OperatingSystem
vs
TypeName: System.Management.ManagementObject#root\cimv2\Win32_OperatingSystem
edit:
You could do something like this:
gwmi -namespace "root\cimv2" -list | %{ gwmi -class $_.name.tostring()}
and if you want all properties with mem* then you could try
| select-object "mem*"
but I'm not sure if that is really what you want.
I think this is very ineffective if you just need to know the amount of memory. What do you really want to have as output?
It's a little late here, but I'm pretty sure the line below will get you where you want to be - that is, a listing of all the properties in the WMI namespace that have "mem" in their name
foreach ($i in gwmi -namespace "root\cimv2" -list ){$i.properties | where-object {$_.name -match 'mem'}| format-table origin,name}
Download Microsoft's Scriptomatic2 and PowershellScriptomatic. They're both hta apps, so you can view them as plain text to see how they work.
http://serverfault.com/questions/61478/powershell-recursive-wmi-query
problem with snmpkit and std::string is more complex than I thought
please do not apply the patch that I sent to the list;
I have to work on it
a.
----- Forwarded message from debian -----
Date: Thu, 26 Jul 2001 15:52:59 +0200
To: Matthew Wilcox <willy@...>
Cc: Matthew Wilcox <willy@...>, 106475@...
Subject: Re: Bug#106475: Won't build on hppa
User-Agent: Mutt/1.2.5i
In-Reply-To: <20010726140225.A30158@...>; from willy@... on Thu, Jul 26, 2001 at 02:02:25PM +0100
"Mail-Followup-To: debian@..."
hi
On Thu, Jul 26, 2001 at 02:02:25PM +0100, Matthew Wilcox wrote:
> On Thu, Jul 26, 2001 at 01:47:52PM +0200, A Mennucc1 wrote:
> > I have though resolved to using a smaller patch (see attachment);
> > I have also disable the compilation of snmptest3.C
> > as long as I am discussing your patch with the author:
> > indeed, doing strcmp(a,b) or strncmp(a,b,strlen(a))
> > have NOT the same effect in general
>
> > diff -ru snmpkit-0.7.gcc295/src/snmpkit snmpkit-0.7/src/snmpkit
> > --- snmpkit-0.7.gcc295/src/snmpkit Sat Dec 2 05:26:39 2000
> > +++ snmpkit-0.7/src/snmpkit Thu Jul 26 12:01:41 2001
> > @@ -35,6 +35,8 @@
> > #include <list>
> > #include <functional>
> >
> > +using namespace std;
> > +
> > #include "snmpkit_tags"
> > #include "snmpkit_except"
> >
>
> No, you MUST not do this! This is a header file which
> gets installed, right?
right
> So a c++ program doing #include
> <snmpkit> is suddenly using namespace std, even if it
> doesn't want to.
I am not good at c++ but I see I have a problem here
but I don't like your patch either, changing all `string' to `std::string' ;
even though most systems use g++, it is not true globally, and
the snmp code was paid for by HP, who may like a better backward portability
I think I will try something with autoconf, when I find time
> Please see the Libstdc++ Porting Howto at
>
thanks, it is quite useful
a.
--
A Mennucc
"È un mondo difficile. Che vita intensa!" ("It's a difficult world. What an intense life!") (Renato Carotone)
----- End forwarded message -----
--
A
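For reference, the fix under discussion — keeping `using namespace std;` out of installed headers by fully qualifying names — looks like this. A minimal C++ sketch with a hypothetical class, not snmpkit's actual code:

```cpp
#include <string>

namespace snmpkit {

// Hypothetical class illustrating the point: std names are fully
// qualified, so a program that does #include <snmpkit> is NOT
// silently pulled into namespace std.
class SnmpError {
    std::string msg;   // qualified: safe in an installed header
public:
    explicit SnmpError(const std::string& m) : msg(m) {}
    const std::string& what() const { return msg; }
};

} // namespace snmpkit
```

A `using namespace std;` (or a narrower `using std::string;`) remains fine inside the library's own .C implementation files, where it affects no one else.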
https://sourceforge.net/p/lpr/mailman/message/2677898/
Hi.
Arnar
>>> "Arnar Birgisson" <arnarb at oddi.is> 18.10.2005 12:03 >>>
Hello there,
I was running mod_python 3.1.3 on Apache 2.0.50 with linux kernel 2.4.31 until yesterday, and all was well. For other reasons, I needed to upgrade to kernel version 2.6.13 yesterday, and along with it I upgraded my distribution from Slackware 10.0 to Slackware-current. That entailed an upgrade from libc 2.3.2 to 2.3.5.
After the upgrade, I recompiled Apache (still version 2.0.50) and compiled and installed mod_python 3.1.4. The problem now is that every request gets a new session id, even if the browser is clearly sending the pysid cookie correctly.
I was and am running python 2.4.1 both before the upgrade and now. I tried uninstalling mod_python completely, and recompiling mod_python 3.1.3 again against the new libc, but that doesn't solve the problem.
I am using a session class that I wrote myself (complete source below) that uses mysql as the data store, and the session info makes its way into the table, which tells me that sess.save() is working properly. The problem seems to be that the constructor (which just calls mod_python.Session.__init__) doesn't find the session id in the request object. I dumped request.headers_in and the pysid cookie is there.
Any ideas on what I should try next?
Arnar
My session class (which worked fine before):
from mod_python import apache, Session as apsess
from Database import getExclusiveDB
from cPickle import loads, dumps
import time
def sqlsession_cleanup():
db = getExclusiveDB()
c = db.cursor()
c.execute("delete from sessiondata where (unix_timestamp() - accessed) > timeout")
c.close()
db.commit()
db.close()
class SqlSession(apsess.BaseSession):
def __init__(self, req, sid=0, secret=None, timeout=0, lock=1):
apsess.BaseSession.__init__(self, req, sid=sid, secret=secret, timeout=timeout, lock=lock)
def do_cleanup(self):
self._req.register_cleanup(sqlsession_cleanup)
self._req.log_error("SqlSession: registered session cleanup",
apache.APLOG_NOTICE)
def do_load(self):
db = getExclusiveDB()
c = db.cursor()
c.execute("select created, accessed, timeout, data from sessiondata where sid = %s", self._sid)
if c.rowcount > 0:
row = c.fetchone()
retval = {
"_data": loads(row[3]),
"_created": row[0],
"_accessed": row[1],
"_timeout": row[2]
}
else:
retval = None
c.close()
db.close()
return retval
def do_save(self, dict):
db = getExclusiveDB()
c = db.cursor()
c.execute("replace into sessiondata (sid, created, accessed, timeout, data) "
+ "values (%s, %s, %s, %s, %s)",
(self._sid, dict['_created'], dict['_accessed'], dict['_timeout'], dumps(dict['_data'])))
c.close()
db.commit()
db.close()
def do_delete(self):
db = getExclusiveDB()
c = db.cursor()
c.execute("delete from sessiondata where sid = %s", self._sid)
c.close()
db.commit()
db.close()
_______________________________________________
Mod_python mailing list
Mod_python at modpython.org
http://modpython.org/pipermail/mod_python/2005-October/019374.html
From: Jeff Garland (jeff_at_[hidden])
Date: 2002-10-19 20:01:54
> This is with 1.29 so no worry.
Ok, well it is still a worry -- just the other way around...
> > msvcprtd.lib(MSVCP70D.dll) : error LNK2005: "public: __thiscall
> > std::locale::id::id(unsigned int)" (??0id_at_locale@std@@QAE_at_I@Z) already
> > defined in DailyFinanceManager.obj
If it is a result of the library it would need to have something to do with the date_names_put template. In
boost/date_time/date_names_put.hpp:40 is the declaration of a static std::locale::id for the iostreams stuff. However this is in
namespace boost::date_time -- this symbol looks like it is in the global namespace. Also, the only instance of the id gets built
into the library (see libs/date_time/src/gregorian/greg_month.cpp) so I'm puzzled...
Jeff
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2002/10/38034.php
CIDA structure
Used with the CFSTR_SHELLIDLIST clipboard format to transfer the pointer to an item identifier list (PIDL) of one or more Shell namespace objects.
Syntax
typedef struct _IDA {
  UINT cidl;
  UINT aoffset[1];
} CIDA, *LPIDA;
cidl
Type: UINT
The number of PIDLs that are being transferred, not including the parent folder.
aoffset
Type: UINT[1]
An array of offsets, relative to the beginning of this structure. The array contains cidl+1 elements: the first element is the offset of the PIDL of the parent folder, and each of the remaining cidl elements is the offset of one of the PIDLs being transferred.
Remarks
To use this structure to retrieve a particular PIDL, add the aoffset value of the PIDL to the address of the structure. The following two macros can be used to retrieve PIDLs from the structure. The first retrieves the PIDL of the parent folder. The second retrieves a PIDL, specified by its zero-based index.
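The two macros referred to in the Remarks follow the pattern below, as given in the Shell clipboard formats documentation. The Windows typedefs and the LPCITEMIDLIST cast are replaced with portable stand-ins here so the sketch is self-contained outside Windows:

```c
/* Minimal stand-in for the Windows UINT type; on Windows the CIDA
   declaration comes from <shlobj.h>. */
typedef unsigned int UINT;

typedef struct _IDA {
    UINT cidl;        /* number of child PIDLs being transferred */
    UINT aoffset[1];  /* cidl+1 offsets: parent folder first */
} CIDA, *LPIDA;

/* Retrieve a PIDL by adding its aoffset value to the address of the
   structure. The first macro yields the parent folder's PIDL; the
   second yields the PIDL at zero-based index i. (The SDK versions
   cast to LPCITEMIDLIST; a byte pointer is used here instead.) */
#define GetPIDLFolder(pida) \
    ((const unsigned char *)(pida) + (pida)->aoffset[0])
#define GetPIDLItem(pida, i) \
    ((const unsigned char *)(pida) + (pida)->aoffset[(i) + 1])
```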
https://docs.microsoft.com/en-us/windows/desktop/api/shlobj_core/ns-shlobj_core-_ida
Back in my college days, I remember the way I used to manually fight with viruses. When I used to kill the first virus,
a second one restarted it, and vice versa.
On an XP based system one barely had any tool by default to suspend any process. Even today if we want to suspend multiple processes instantaneously, it’s not possible. I am sorry for this delay but I designed this software a few years ago (in
a console subsystem with unmanaged code) but never got time to post it. I used this application to hunt viruses, restrict data voracious MS processes, etc.
On a Windows system, processes may run as single thread or may contain multiple threads running simultaneously. Hence to suspend a process it’s a prerequisite
to suspend all the threads associated with that particular process. In .NET I don’t find any facility already provided to suspend a thread. So I switched to WinAPI for this purpose.
The code contains nothing big but just a couple of small files. One for the GUI event handling and another for the declaration of Win32 functions. I will basically avoid talking about the GUI thing as most of us are already aware of it. If someone is not understanding the design, I’ll suggest opening
this project in VS and iterating through the code.
On the left side of the application is a ListBox containing process name comma separated process ID. Using the move button,
we can fetch the selected processes from the left listbox to the right listbox. Operations (Suspend, Resume, Kill) will only execute
on the processes enlisted in the right listbox.
Once the user opens the application, all the processes automatically get populated in the left listbox, but they are not refreshed until done manually using the Refresh button.
To get the list of running processes, following is the code sample:
Process[] prcs = Process.GetProcesses();
Plist.Items.Clear();
foreach (Process pn in prcs)
{
Plist.Items.Add(String.Format("{0}, ({1})",pn.ProcessName, pn.Id ));
}
Now the only thing of our interest is pn.Id(Process Id). With this we can do anything we like.
We’ll be passing this pn.Id to the thread suspending routine. Inside the suspend routine,
Process.Thread gives out the thread collection. By enumerating the thread collection, each thread can be brought to a halt.
The SuspendThread function suspends the thread until it is directed to
be resumed by ResumeThread.
A brief code view is provided for declaring and using the suspending and resuming routines.
using System.Runtime.InteropServices;
using System.Diagnostics;
class KernelCalls
{
[DllImport("kernel32.dll")]
static extern IntPtr OpenThread(Int32 dwDesiredAccess,
bool bInheritHandle, UInt32 dwThreadId);
//
[DllImport("kernel32.dll")]
static extern UInt32 SuspendThread(IntPtr hThread);
[DllImport("kernel32.dll")]
static extern UInt32 ResumeThread(IntPtr hThread);
// Suspend every thread of the given process.
// 0x0002 is the THREAD_SUSPEND_RESUME access right.
public static void SuspendProcess(Process proc)
{
    foreach (ProcessThread procthr in proc.Threads)
    {
        IntPtr pOpenThread = OpenThread(0x0002, false, (UInt32)procthr.Id);
        if (pOpenThread == IntPtr.Zero)
        {
            break;
        }
        SuspendThread(pOpenThread);
    }
}
}
After this the only thing required to be done is setting the GUI logic. After
implementing GUI handling routines the application becomes ready to be launched live.
Although the code doesn’t look so useful, suppose a condition where a virus infected
the system:
What will you do?!
That's what I did when I faced this situation the first time. I made this program, took it on a pen drive, launched it on my friend's infected system, enlisted all those ill-natured processes, and fired the "Kill". After a little change in the Registry values, his computer came back on track.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
[DllImport("ntdll.dll", SetLastError = true)]
private static extern void RtlSetProcessIsCritical(UInt32 v1, UInt32 v2, UInt32 v3);
static void Main(string[] args)
{
System.Diagnostics.Process.EnterDebugMode();
RtlSetProcessIsCritical(1, 0, 0);
Console.WriteLine("Terminating this process will cause Instant BSoD");
Console.ReadKey();
RtlSetProcessIsCritical(0, 0, 0);
}
Process proc = Process.GetProcessById(pid)
List<IntPtr> ThrLis = new List<IntPtr>();
foreach (ProcessThread procthr in proc.Threads)
{
IntPtr pOpenThread = OpenThread(0x0002, false, (UInt32)procthr.Id);
if (pOpenThread == IntPtr.Zero)
{
ThrLis.Clear();
break;
}
ThrLis.Add(pOpenThread);
}
http://www.codeproject.com/Articles/339722/Process-Suspender-and-Advanced-Terminator?PageFlow=FixedWidth
For a program to receive input, either interactively or in a batch
environment, you must provide another program or a routine to receive the
input. Complicated input requires additional code to break the input
into pieces that mean something to the program. You can use the
lex and yacc commands to develop this type of input
program.
The lex command helps
write a C language program that can receive and translate character-stream
input into program actions. To use the lex command, you must
supply or write a specification file that contains:
The format and logic allowed in this file are discussed in the lex Specification File section of the lex command.
The lex command generates a C language program that can analyze an input stream using information in the specification file. The lex command then stores the output program in a lex.yy.c file. If the output program recognizes a simple, one-word input structure, you can compile the lex.yy.c output file with the following command to get an executable lexical analyzer:
cc lex.yy.c -ll
However, if the lexical analyzer must recognize more complex syntax, you can create a parser program to use with the output file to ensure proper handling of any input. See Creating a Parser with the yacc Program for more information.
You can move a lex.yy.c output file to another system if it has a C compiler that supports the lex library functions.
The compiled lexical analyzer performs the following functions:
The lexical analyzer that the lex command generates uses an analysis method called a deterministic finite-state automaton. This method provides for a limited number of conditions that the lexical analyzer can exist in, along with the rules that determine what state the lexical analyzer is in.
The automaton allows the generated lexical analyzer to look ahead more than one or two characters in an input stream. For example, suppose you define two rules in the lex specification file: one looks for the string ab and the other looks for the string abcdefg. If the lexical analyzer receives an input string of abcdefh, it reads characters to the end of input string before determining that it does not match the string abcdefg. The lexical analyzer then returns to the rule that looks for the string ab, decides that it matches part of the input, and begins trying to find another match using the remaining input cdefh.
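The fallback behavior in that example can be sketched outside of lex. This is a toy maximal-munch matcher over fixed literals, illustrating the principle rather than the generated automaton:

```c
#include <string.h>

/* Try the longest pattern first; if it fails partway through the
   input, fall back to the shorter pattern, exactly as described for
   the "ab" / "abcdefg" rules above. */
static const char *patterns[] = { "abcdefg", "ab" };  /* longest first */

/* Returns the length of the longest pattern matching a prefix of s,
   or 0 if none matches; the caller resumes scanning after the match. */
static size_t match_prefix(const char *s)
{
    for (size_t i = 0; i < sizeof patterns / sizeof *patterns; i++) {
        size_t n = strlen(patterns[i]);
        if (strncmp(s, patterns[i], n) == 0)
            return n;
    }
    return 0;
}
```

For input "abcdefh", the long pattern fails at the final character, so the matcher falls back to "ab" and the remaining input "cdefh" is scanned next.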
Specifying extended regular expressions in a lex specification file is similar to methods used in the sed or ed commands. An extended regular expression specifies a set of strings to be matched. The expression contains both text characters and operator characters. Text characters match the corresponding characters in the strings being compared. Operator characters specify repetitions, choices, and other features.
Numbers and letters of the alphabet are considered text characters. For example, the extended regular expression integer matches the string integer, and the expression a57D looks for the string a57D.
The following list describes how
operators are used to specify extended regular expressions:
To use the operator characters as text characters, use one of the escape sequences: " " (double quotation marks) or \ (backslash). The " " operator indicates what is enclosed is text. Thus, the following example matches the string xyz++:
xyz"++"
Note that a part of a string may be quoted. Quoting an ordinary text character has no effect. For example, the following expression is equivalent to the previous example:
"xyz++"
Quoting all characters that are not letters or numbers ensures that text is interpreted as text.
Another way to turn an operator character into a text character is to put a \ (backslash) character before the operator character. For example, the following expression is equivalent to the preceding examples:
xyz\+\+
When the lexical analyzer matches one of the extended regular expressions in the rules section of the specification file, it executes the action that corresponds to the extended regular expression. Without sufficient rules to match all strings in the input stream, the lexical analyzer copies the input to standard output. Therefore, do not create a rule that only copies the input to the output. The default output can help find gaps in the rules.
When using the lex command to process input for a parser that the yacc command produces, provide rules to match all input strings. Those rules must generate output that the yacc command can interpret.
To ignore the input associated with an extended regular expression, use a ; (C language null statement) as an action. The following example ignores the three spacing characters (blank, tab, and new-line):
[ \t\n] ;
To avoid repeatedly writing the same action, use the | (pipe symbol). This character indicates that the action for this rule is the same as the action for the next rule. For instance, the previous example to ignore blank, tab, and new-line characters can also be written as follows:
" " | "\t" | "\n" ;
The quotation marks around \n and \t are not required.
To find out what text matched an expression in the rules section of the specification file, you can include a C language printf subroutine call as one of the actions for that expression. When the lexical analyzer finds a match in the input stream, the program puts the matched string into the external character (char) and wide character (wchar_t) arrays, called yytext and yywtext, respectively. For example, you can use the following rule to print the matched string:
[a-z]+ printf("%s",yytext);
The C language printf
subroutine accepts a format argument and data to be printed. In this
example the arguments to the printf subroutine have the following
meanings:
The lex command defines ECHO; as a special action to print out the contents of yytext. For example, the following two rules are equivalent:
[a-z]+ ECHO;
[a-z]+ printf("%s",yytext);
You can change the representation
of yytext by using either %array or %pointer
in the definitions section of the lex specification file.
To find the number of characters
that the lexical analyzer matched for a particular extended regular
expression, use the yyleng or the yywleng external
variables.
To count both the number of words and the number of characters in words in the input, use the following action:
[a-zA-Z]+ {words++;chars += yyleng;}
This action totals the number of characters in the words matched and puts that number in chars.
The following expression finds the last character in the string matched:
yytext[yyleng-1]
The lex command partitions the input stream and does not search for all possible matches of each expression. Each character is accounted for only once. To override this choice and search for items that may overlap or include each other, use the REJECT directive. For example, to count all instance of she and he, including the instances of he that are included in she, use the following action:
she {s++; REJECT;}
he {h++;}
\n |
. ;
After counting the occurrences of she, the lex command rejects the input string and then counts the occurrences of he. Because he does not include she, a REJECT action is not necessary on he.
Normally, the next string from the input stream overwrites the current entry in the yytext array. If you use the yymore subroutine, the next string from the input stream is added to the end of the current entry in the yytext array.
For example, the following lexical analyser looks for strings:
%s instring
%%
<INITIAL>\" { /* start of string */
BEGIN instring;
yymore();
}
<instring>\" { /* end of string */
printf("matched %s\n", yytext);
BEGIN INITIAL;
}
<instring>. { yymore(); }
<instring>\n {
printf("Error, new line in string\n");
BEGIN INITIAL;
}
Even though a string may be recognized by matching several rules, repeated calls to the yymore subroutine ensure that the yytext array will contain the entire string.
To return characters to the input stream, use the call:
yyless(n)
where n is the number of characters of the current string to keep. Characters in the string beyond this number are returned to the input stream. The yyless subroutine provides the same type of look-ahead that the / (slash) operator uses, but it allows more control over its usage.
Use the yyless subroutine to process text more than once. For example, when parsing a C language program, an expression such as x=-a is difficult to understand. Does it mean x is equal to minus a, or is it an older representation of x -= a, which means decrease x by the value of a? To treat this expression as x is equal to minus a, but print a warning message, use a rule such as:
=-[a-zA-Z] {
    printf("Operator (=-) ambiguous\n");
    yyless(yyleng-1);
    ... action for = ...
}
The lex program allows
a program to use the following input/output (I/O) subroutines:
lex provides these subroutines as macro definitions. The subroutines are coded in the lex.yy.c file. You can override them and provide other versions.
The winput, wunput, and woutput macros are defined to use the yywinput, yywunput, and yywoutput subroutines. For compatibility, the yy subroutines subsequently use the input, unput, and output subroutine to read, replace, and write the necessary number of bytes in a complete multibyte character.
These subroutines define the relationship between external files and internal characters. If you change the subroutines, change them all in the same way. They should follow these rules:
The lex.yy.c file allows the lexical analyzer to back up a maximum of 200 characters.
To read a file containing nulls, create a different version of the input subroutine. In the normal version of the input subroutine, the returned value of 0 (from the null characters) indicates the end of file and ends the input.
The lexical analyzers that the lex command generates process character I/O through the input, output, and unput subroutines. Therefore, to return values in the yytext subroutine, the lex command uses the character representation that these subroutines use. Internally however, the lex command represents each character with a small integer. When using the standard library, this integer is the value of the bit pattern the computer uses to represent the character. Normally, the letter 'a' is represented in the same form as the character constant 'a'. If you change this interpretation with different I/O subroutines, put a translation table in the definitions section of the specification file. The translation table begins and ends with lines that contain only the entries:
%T
The translation table contains additional lines that indicate the value associated with each character. For example:
%T
{integer} {character string}
{integer} {character string}
{integer} {character string}
%T
When the lexical analyzer reaches
the end of a file, it calls the yywrap library subroutine.
However, if the lexical analyzer receives input from more than one source, change the yywrap subroutine. The new function must get the new input and return a value of 0 to the lexical analyzer. A return value of 0 indicates the program should continue processing.
You can also include code to print summary reports and tables when the lexical analyzer ends in a new version of the yywrap subroutine. The yywrap subroutine is the only way to force the yylex subroutine to recognize the end of input.
The lex command passes C code, unchanged, to the lexical analyzer in the following circumstances:
You can define string macros that the lex program expands when it generates the lexical analyzer. Define them before the first %% delimiter in the lex specification file. Any line in this section that begins in column 1 and that does not lie between %{ and %} defines a lex substitution string. Substitution string definitions have the general format:
name translation
where name and translation are separated by at least one blank or tab, and the specified name begins with a letter. When the lex program finds the string defined by name enclosed in {} (braces) in the rules part of the specification file, it changes that name to the string defined in translation and deletes the braces.
For example, to define the names D and E, put the following definitions before the first %% delimiter in the specification file:
D  [0-9]
E  [DEde][-+]{D}+
Then, use these names in the rules section of the specification file to make the rules shorter:
{D}+                    printf("integer");
{D}+"."{D}*({E})?  |
{D}*"."{D}+({E})?  |
{D}+{E}                 printf("real");
You can also include the following items in the definitions section:
A rule may be associated with any start condition. However, the lex program recognizes the rule only when in that associated start condition. You can change the current start condition at any time.
Define start conditions in the definitions section of the specification file by using a line in the following form:
%Start name1 name2
where name1 and name2 define names that represent conditions. There is no limit to the number of conditions, and they can appear in any order. You can also shorten the word Start to s or S.
When using a start condition in the rules section of the specification file, enclose the name of the start condition in <> (less than, greater than) symbols at the beginning of the rule. The following example defines a rule, expression, that the lex program recognizes only when the lex program is in start condition name1:
<name1>expression
To put the lex program in a particular start condition, execute the action statement in the action part of a rule; for instance, BEGIN in the following line:
BEGIN name1;
This statement changes the start condition to name1.
To resume the normal state, enter:
BEGIN 0;
or
BEGIN INITIAL;
where INITIAL is defined to be 0 by the lex program. BEGIN 0; resets the lex program to its initial condition.
The lex program also supports exclusive start conditions specified with %x (percent sign, lowercase x) or %X (percent sign, uppercase X) operator followed by a list of exclusive start names in the same format as regular start conditions. Exclusive start conditions differ from regular start conditions in that rules that do not begin with a start condition are not active when the lexical analyzer is in an exclusive start state. For example:
%s one
%x two
%%
abc      {printf("matched ");ECHO;BEGIN one;}
<one>def {printf("matched ");ECHO;BEGIN two;}
<two>ghi {printf("matched ");ECHO;BEGIN INITIAL;}
In start state one in the preceding example, both abc and def can be matched. In start state two, only ghi can be matched.
Compiling a lex program is a two-step process:
For example, if the lex specification file is called lextest, enter the following commands:
lex lextest
cc lex.yy.c -ll
The lex library
contains the following subroutines:
Some of the lex subroutines can be substituted by user-supplied routines. For example, lex supports user-supplied versions of the main and yywrap subroutines. The library versions of these routines, provided as a base, are as follows:
main
#include <stdio.h>
#include <locale.h>

main()
{
    setlocale(LC_ALL, "");
    yylex();
    exit(0);
}
yywrap
yywrap()
{
    return(1);
}
The yymore, yyless, and yyreject subroutines are available only through the lex library; however, these subroutines are required only when used in lex actions.
Posted 09 July 2012 - 03:20 AM
I have created a Windows game library to include the basic stuff of XNA so I don't have to write it again (like cameras). Everything was working fine until I added a Map namespace with a MapReader class. Ever since then, every class and namespace I add to the library triggers the error: Error 1 The type or namespace name 'namespace' does not exist in the namespace 'GameLibrary'. I do not understand the problem, although there is a TextInput namespace that works fine.
Python's Mypy: Callables and Generators
>>> def foo():
...     pass
>>> type(foo)
<class 'function'>
Similarly, when you create a new class, you're adding a new object type to Python:
>>> class Foo():
...     pass
>>> type(Foo)
<class 'type'>
It's a pretty common paradigm in Python to write a function that, when it runs, defines and runs an inner function. This is also known as a "closure", and it has a few different uses. For example, you can write:
def foo(x):
    def bar(y):
        return f"In bar, {x} * {y} = {x*y}"
    return bar
You then can run:
b = foo(10)
print(b(2))
And you'll get the following output:
In bar, 10 * 2 = 20
I don't want to dwell on how all of this works, including inner functions and Python's scoping rules. I do, however, want to ask the question "how can you use Mypy to check all of this?"
You could annotate both
x and
y as
int. And you can annotate the
return value from
bar as a string. But how can you annotate the return
value from
foo? Given that, as shown above, functions are of type
function, perhaps you can use that. But
function isn't actually a
recognized name in Python.
Instead, you'll need to use the
typing module, which comes with Python
3 so you can do this kind of type checking. And in
typing, the
name
Callable is defined for precisely this purpose. So you can write:
from typing import Callable

def foo(x: int) -> Callable:
    def bar(y: int) -> str:
        return f"In bar, {x} * {y} = {x*y}"
    return bar

b = foo(10)
print(b(2))
Sure enough, this passes Mypy's checks. The function
foo returns
Callable, a description that includes both functions and classes.
But, wait a second. Maybe you don't only want to check that
foo returns
a
Callable. Maybe you also want to make sure that it returns a function that
takes an
int as an argument. To do that, you'll use square brackets
after the word
Callable, putting two elements in those brackets. The
first will be a list (in this case, a one-element list) of argument
types. The second element in the list will describe the return type from
the function. In other words, the code now will look like this:
#!/usr/bin/env python3

from typing import Callable

def foo(x: int) -> Callable[[int], str]:
    def bar(y: int) -> str:
        return f"In bar, {x} * {y} = {x*y}"
    return bar

b = foo(10)
print(b(2))
Generators
With all this talk of callables, you also should consider what happens
with generator functions. Python loves iteration and encourages you to
use
for loops wherever you can. In many cases, it's easiest to express
your iterator in the form of a function, known in the Python world as a
"generator function". For example, you can create a generator function
that returns the Fibonacci sequence as follows:
def fib():
    first = 0
    second = 1
    while True:
        yield first
        first, second = second, first+second
You then can get the first 50 Fibonacci numbers as follows:
g = fib()
for i in range(50):
    print(next(g))
That's great, but what if you want to add Mypy checking to your
fib
function? It would seem that you can just say that the return value is
an integer:
def fib() -> int:
    first = 0
    second = 1
    while True:
        yield first
        first, second = second, first+second
But if you try running this via Mypy, you get a pretty stern response:
atf201906b.py:4: error: The return type of a generator function should be "Generator" or one of its supertypes
atf201906b.py:14: error: No overload variant of "next" matches argument type "int"
atf201906b.py:14: note: Possible overload variant:
atf201906b.py:14: note:     def [_T] next(i: Iterator[_T]) -> _T
atf201906b.py:14: note:     <1 more non-matching overload not shown>
Whoa! What's going on?
Well, it's important to remember that the result of running a generator function is not whatever you're yielding with each iteration. Rather, the result is a generator object. The generator object, in turn, then yields a particular type with each iteration.
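The point is easy to verify at the Python prompt; here is a small sketch reusing the fib function defined above:

```python
def fib():
    first = 0
    second = 1
    while True:
        yield first
        first, second = second, first + second

# Calling the generator function does not produce an int; it produces
# a generator object, which then yields ints one at a time:
g = fib()
print(type(g).__name__)               # generator
print([next(g) for _ in range(5)])    # [0, 1, 1, 2, 3]
```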
So what you really want to do is tell Mypy that
fib will return a
generator, and that with each iteration of the generator, you'll get an
integer. You would think that you could do it this way:
from typing import Generator

def fib() -> Generator[int]:
    first = 0
    second = 1
    while True:
        yield first
        first, second = second, first+second
But if you try to run Mypy, you get the following:
atf201906b.py:6: error: "Generator" expects 3 type arguments, but 1 given
It turns out that the Generator type can (optionally) get arguments in square brackets. But if you provide any arguments, you must provide three:
- The type returned with each iteration—what you normally think about from iterators.
- The type that the generator will receive, if you invoke the send method on it.
- The type that will be returned when the generator exits altogether.
Since only the first of these is relevant in this program, you'll pass
None for each of the other values:
from typing import Generator

def fib() -> Generator[int, None, None]:
    first = 0
    second = 1
    while True:
        yield first
        first, second = second, first+second
Sure enough, it now passes Mypy's tests.
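If you do use the send method, the other two type slots come into play. Here is a hypothetical example (not from the article) whose generator yields ints, receives ints via send, and returns a str when it finishes, so it is annotated Generator[int, int, str]:

```python
from typing import Generator

def accumulate(limit: int) -> Generator[int, int, str]:
    # Yields the running total, receives increments via send(),
    # and returns a summary string once the total reaches the limit.
    total = 0
    while total < limit:
        increment = yield total
        total += increment if increment is not None else 1
    return f"done at {total}"

gen = accumulate(3)
print(next(gen))       # prime the generator: yields 0
print(gen.send(2))     # total is now 2: yields 2
try:
    gen.send(2)        # total reaches 4, so the generator returns
except StopIteration as stop:
    print(stop.value)  # done at 4
```

The return value of a finished generator travels in the StopIteration exception, which is why the last print reads stop.value.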
Conclusion
You might think that Mypy isn't up to the task of dealing with complex typing problems, but it actually has been thought out rather well. And of course, what I've shown here (and in my previous two articles on Mypy) is just the beginning; the Mypy authors have solved all sorts of problems, from modules mutually referencing each others' types to aliasing long type descriptions.
If you're thinking of tightening up your organization's code, adding type checking via Mypy is a great way to go. A growing number of organizations are adding its checks, little by little, and are enjoying something that dynamic-language advocates have long ignored, namely that if the computer can check what types you're using, your programs actually might run more smoothly.
Resources
You can read more about Mypy here. That site
has documentation, tutorials and even information for people using
Python 2 who want to introduce
mypy via comments (rather than
annotations).
You can read more about the origins of type annotations in Python, and how to use them, in PEP (Python enhancement proposal) 484, available online here.
Web::Simple - A quick and easy way to build simple web applications
#!/usr/bin/env perl

package HelloWorld;

use Web::Simple;

sub dispatch_request {
  sub (GET) {
    [ 200, [ 'Content-type', 'text/plain' ], [ 'Hello world!' ] ]
  },
  sub () {
    [ 405, [ 'Content-type', 'text/plain' ], [ 'Method not allowed' ] ]
  }
}

HelloWorld->run_if_script;
If you save this file into your cgi-bin as
hello-world.cgi and then visit:
you'll get the "Hello world!" string output to your browser. At the same time this file will also act as a class module, so you can save it as HelloWorld.pm and use it as-is in test scripts or other deployment mechanisms.
Note that you should retain the ->run_if_script even if your app is a module, since this additionally makes it valid as a .psgi file, which can be extremely useful during development.
For more complex examples and non-CGI deployment, see Web::Simple::Deployment. To get help with Web::Simple, please connect to the irc.perl.org IRC network and join #web-simple.
The philosophy of Web::Simple is to keep to an absolute bare minimum for everything. It is not designed to be used for large scale applications; the Catalyst web framework already works very nicely for that and is a far more mature, well supported piece of software.
However, if you have an application that only does a couple of things, and want to not have to think about complexities of deployment, then Web::Simple might be just the thing for you.
The only public interface the Web::Simple module itself provides is an
import based one:
use Web::Simple 'NameOfApplication';
This sets up your package (in this case "NameOfApplication" is your package) so that it inherits from Web::Simple::Application and imports strictures; it also installs a
PSGI_ENV constant for convenience, along with some other subroutines.
Importing strictures will automatically make your code use the
strict and
warnings pragma, so you can skip the usual:
use strict;
use warnings FATAL => 'all';
provided you 'use Web::Simple' at the top of the file. Note that we turn on *fatal* warnings so if you have any warnings at any point from the file that you did 'use Web::Simple' in, then your application will die. This is, so far, considered a feature.
When we inherit from Web::Simple::Application we also use Moo, which is the equivalent of:
{
  package NameOfApplication;
  use Moo;
  extends 'Web::Simple::Application';
}
So you can use Moo features in your application, such as creating attributes using the
has subroutine, etc. Please see the documentation for Moo for more information.
It also exports the following subroutines for use in dispatchers:
response_filter { ... };
redispatch_to '/somewhere';
Finally, import sets
$INC{"NameOfApplication.pm"} = 'Set by "use Web::Simple;" invocation';
so that perl will not attempt to load the application again even if
require NameOfApplication;
is encountered in other code.
One important thing to remember when using
NameOfApplication->run_if_script;
At the end of your app is that this call will create an instance of your app for you automatically, regardless of context. An easier way to think of this would be if the method were more verbosely named
NameOfApplication->run_request_if_script_else_turn_coderef_for_psgi;
Web::Simple, despite being straightforward to use, has a powerful system for matching all sorts of incoming URLs to one or more subroutines. These subroutines can be simple actions to take for a given URL, or something more complicated, including entire Plack applications, Plack::Middleware and nested subdispatchers.
sub dispatch_request {
  # matches: GET /user/1.htm?show_details=1
  #          GET /user/1.htm
  sub (GET + /user/* + ?show_details~ + .htm|.html|.xhtml) {
    my ($self, $user_id, $show_details) = @_;
    ...
  },
  # matches: POST /user?username=frew
  #          POST /user?username=mst&first_name=matt&last_name=trout
  sub (POST + /user + ?username=&*) {
    my ($self, $username, $misc_params) = @_;
    ...
  },
  # matches: DELETE /user/1/friend/2
  sub (DELETE + /user/*/friend/*) {
    my ($self, $user_id, $friend_id) = @_;
    ...
  },
  # matches: PUT /user/1?first_name=Matt&last_name=Trout
  sub (PUT + /user/* + ?first_name~&last_name~) {
    my ($self, $user_id, $first_name, $last_name) = @_;
    ...
  },
  sub (/user/*/...) {
    my $user_id = $_[1];
    # matches: PUT /user/1/role/1
    sub (PUT + /role/*) {
      my $role_id = $_[1];
      ...
    },
    # matches: DELETE /user/1/role/1
    sub (DELETE + /role/*) {
      my $role_id = $_[1];
      ...
    },
  },
}
At the beginning of a request, your app's dispatch_request method is called with the PSGI $env as an argument. You can handle the request entirely in here and return a PSGI response arrayref if you want:
sub dispatch_request {
  my ($self, $env) = @_;
  [ 404, [ 'Content-type' => 'text/plain' ], [ 'Amnesia == fail' ] ]
}
However, generally, instead of that, you return a set of dispatch subs:
sub dispatch_request {
  my $self = shift;
  sub (/) { redispatch_to '/index.html' },
  sub (/user/*) { $self->show_user($_[1]) },
  ...
}
Well, a sub is a valid PSGI response too (for ultimate streaming and async cleverness). If you want to return a PSGI sub you have to wrap it into an array ref.
sub dispatch_request {
  [ sub {
      my $respond = shift;
      # This is pure PSGI here, so read perldoc PSGI
  } ]
}
If you return a subroutine with a prototype, the prototype is treated as a match specification - and if the test is passed, the body of the sub is called as a method and passed any matched arguments (see below for more details).
You can also return a plain subroutine which will be called with just
$env - remember that in this case if you need
$self you must close over it.
If you return a normal object, Web::Simple will simply return it upwards on the assumption that a response_filter (or some arbitrary Plack::Middleware) somewhere will convert it to something useful. This allows:
sub dispatch_request {
  my $self = shift;
  sub (.html) { response_filter { ... } },
  sub (/user/*) { ... },
}
to render a user object to HTML, if there is an incoming URL such as:
This works because, as we descend down the dispatchers, we first match
sub (.html), which adds a
response_filter (basically a specialized routine that follows the Plack::Middleware specification), and then later we also match
sub (/user/*) which gets a user and returns that as the response. This user object 'bubbles up' through all the wrapping middleware until it hits the
response_filter we defined, after which the return is converted to a true html response.
However, two types of objects are treated specially - a
Plack::Component object will have its
to_app method called and be used as a dispatcher:
sub dispatch_request { my $self = shift;) { ## something else that needs a session }, } }
And that's it - but remember that all this happens recursively - it's dispatchers all the way down. A URL incoming pattern will run all matching dispatchers and then hit all added filters or Plack::Middleware.
sub (GET) {
A match specification beginning with a capital letter matches HTTP requests with that request method.
sub (/login) {
A match specification beginning with a / is a path match. In the simplest case it matches a specific path. To match a path with a wildcard part, you can do:
sub (/user/*) { $self->handle_user($_[1]) }
This will match /user/<anything> where <anything> does not include a literal / character. The matched part becomes part of the match arguments. You can also match more than one part:
sub (/user/*/*) {
  my ($self, $user_1, $user_2) = @_;

sub (/domain/*/user/*) {
  my ($self, $domain, $user) = @_;
and so on. To match an arbitrary number of parts, use
**:
sub (/page/**) { my ($self, $match) = @_;
This will result in a single element for the entire match. Note that you can do
sub (/page/**/edit) {
to match an arbitrary number of parts up to but not including some final part.
Note: Since Web::Simple handles a concept of file extensions,
* and
** matchers will not by default match things after a final dot, and this can be modified by using
*.* and
**.* in the final position, e.g.:
/one/*   matches /one/two.three and captures "two"
/one/*.* matches /one/two.three and captures "two.three"
/**      matches /one/two.three and captures "one/two"
/**.*    matches /one/two.three and captures "one/two.three"
Finally,
sub (/foo/...) {
Will match
/foo/ on the beginning of the path and strip it. This is designed to be used to construct nested dispatch structures, but can also prove useful for having e.g. an optional language specification at the start of a path.
Note that the '...' is a "maybe something here, maybe not" so the above specification will match like this:
/foo         # no match
/foo/        # match and strip path to '/'
/foo/bar/baz # match and strip path to '/bar/baz'
Almost the same,
sub (/foo...) {
Will match on
/foo/bar/baz, but also include
/foo. Otherwise it operates the same way as
/foo/....
/foo         # match and strip path to ''
/foo/        # match and strip path to '/'
/foo/bar/baz # match and strip path to '/bar/baz'
Please note the difference between
sub(/foo/...) and
sub(/foo...). In the first case, this is expecting to find something after
/foo (and fails to match if nothing is found), while in the second case we can match both
/foo and
/foo/more/to/come. Note that with (/foo...), trailing slashes in path specs are significant. This is intentional and necessary to retain the ability to use relative links on websites. Let's demonstrate with this link:
<a href="bar">bar</a>
If the user loads the url
/foo/ and clicks on this link, they will be sent to
/foo/bar. However when they are on the url
/foo and click this link, then they will be sent to
/bar.
This makes it necessary to be explicit about the trailing slash.

sub (%<param spec>) { # match body params
The body spec will match if the request content is either application/x-www-form-urlencoded or multipart/form-data - the latter of which is required for uploads - see below.
The param spec is elements of one of the following forms:
param~    # optional parameter
param=    # required parameter
@param~   # optional multiple parameter
@param=   # required multiple parameter
:param~   # optional parameter in hashref
:param=   # required parameter in hashref
:@param~  # optional multiple in hashref
:@param=  # required multiple in hashref
*         # include all other parameters in hashref
@*        # include all other parameters as multiple in hashref
separated by the
& character. The arguments added to the request are one per non-
:/
* parameter (scalar for normal, arrayref for multiple), plus if any
:/
* specs exist a hashref containing those values.
Please note that if you specify a multiple type parameter match, you are ensured of getting an arrayref for the value, EVEN if the current incoming request has only one value. However if a parameter is specified as single and multiple values are found, the last one will be used.
For example to match a
page parameter with an optional
order_by parameter one would write:
sub (?page=&order_by~) {
  my ($self, $page, $order_by) = @_;
  return unless $page =~ /^\d+$/;
  $page ||= 'id';
  response_filter {
    $_[1]->search_rs({}, $p);
  }
}
to implement paging and ordering against a DBIx::Class::ResultSet object.
Another Example: To get all parameters as a hashref of arrayrefs, write:
sub (?@*) {
  my ($self, $params) = @_;
  ...
To get two parameters as a hashref, write:
sub (?:user~&:domain~) {
  my ($self, $params) = @_;
  # params contains only 'user' and 'domain' keys
You can also mix these, so:
sub (?foo=&@bar~&:coffee=&@*) {
  my ($self, $foo, $bar, $params) = @_;
where $bar is an arrayref (possibly an empty one), and $params contains arrayref values for all parameters not mentioned and a scalar value for the 'coffee' parameter.
Note, in the case where you combine arrayref, single parameter and named hashref style, the arrayref and single parameters will appear in
@_ in the order you defined them in the prototype, but all hashrefs will merge into a single
$params, as in the example above.
sub (*foo=) { # param specifier can be anything valid for query or body
The upload match system functions exactly like a query/body match, except that the values returned (if any) are
Web::Dispatch::Upload objects.
Note that this match type will succeed in two circumstances where you might not expect it to - first, when the field exists but is not an upload field and second, when the field exists but the form is not an upload form (i.e. content type "application/x-www-form-urlencoded" rather than "multipart/form-data"). In either of these cases, what you'll get back is a
Web::Dispatch::NotAnUpload object, which will
die with an error pointing out the problem if you try and use it. To be sure you have a real upload object, call
$upload->is_upload # returns 1 on a valid upload, 0 on a non-upload field
and to get the reason why such an object is not an upload, call
$upload->reason # returns a reason or '' on a valid upload.
Other than these two methods, the upload object provides the same interface as Plack::Request::Upload with the addition of a stringify to the temporary filename to make copying it somewhere else easier to handle.
Matches may be combined with the + character - e.g.
sub (GET + /user/*) {
to create an AND match. They may also be combined with the | character - e.g.
sub (GET|POST) {
to create an OR match. Matches can be nested with () - e.g.
sub ((GET|POST) + /user/*) {
and negated with ! - e.g.
sub (!/user/foo + /user/*) {
! binds to the immediate rightmost match specification, so if you want to negate a combination you will need to use
sub ( !(POST|PUT|DELETE) ) {
and | binds tighter than +, so
sub ((GET|POST) + /user/*) {
and
sub (GET|POST + /user/*) {
are equivalent, but
sub ((GET + .
response_filter {
  # Hide errors from the user because we hates them, preciousss
  if (ref($_[0]) eq 'ARRAY' && $_[0]->[0] == 500) {
    $_[0] = [ 200, @{$_[0]}[1..$#{$_[0]}] ];
  }
  return $_[0];
};
The response_filter subroutine is designed for use inside dispatch subroutines.
It creates and returns a special dispatcher that always matches, and calls the block passed to it as a filter on the result of running the rest of the current dispatch chain.
Thus the filter above runs further dispatch as normal, but if the result of dispatch is a 500 (Internal Server Error) response, changes this to a 200 (OK) response without altering the headers or body.
redispatch_to '/other/url';
The redispatch_to subroutine is designed for use inside dispatch subroutines.
It creates and returns a special dispatcher that always matches, and instead of continuing dispatch re-delegates it to the start of the dispatch process, but with the path of the request altered to the supplied URL.
Thus if you receive a POST to
/some/url and return a redispatch to
/other/url, the dispatch behaviour will be exactly as if the same POST request had been made to
/other/url instead.
Note, this is not the same as returning an HTTP 3xx redirect as a response; rather it is a much more efficient internal process.
Web::Simple was originally written to form part of my Antiquated Perl talk for Italian Perl Workshop 2009, but in writing the bloggery example I realised that having a bare minimum system for writing web applications that doesn't drive me insane was rather nice and decided to spend my attempt at nanowrimo for 2009 improving and documenting it to the point where others could use it.
The Antiquated Perl talk can be found at and the slides are reproduced in this distribution under Web::Simple::AntiquatedPerl.
irc.perl.org #web-simple
Because mst's non-work email is a bombsite so he'd never read it anyway.
Gitweb is on and the clone URL is:
git clone git://git.shadowcat.co.uk/catagits/Web-Simple.git
Matt S. Trout (mst) <mst@shadowcat.co.uk>
Devin Austin (dhoss) <dhoss@cpan.org>
Arthur Axel 'fREW' Schmidt <frioux@gmail.com>
gregor herrmann (gregoa) <gregoa@debian.org>
John Napiorkowski (jnap) <jjn1056@yahoo.com>
Josh McMichael <jmcmicha@linus222.gsc.wustl.edu>
Justin Hunter (arcanez) <justin.d.hunter@gmail.com>
Kjetil Kjernsmo <kjetil@kjernsmo.net>
markie <markie@nulletch64.dreamhost.com>
Christian Walde (Mithaldu) <walde.christian@googlemail.com>
nperez <nperez@cpan.org>
Robin Edwards <robin.ge@gmail.com>
Andrew Rodland (hobbs) <andrew@cleverdomain.org>
Robert Sedlacek (phaylon) <r.sedlacek@shadowcat.co.uk>
Hakim Cassimally (osfameron) <osfameron@cpan.org>
Karen Etheridge (ether) <ether@cpan.org>
Copyright (c) 2011 the Web::Simple "AUTHOR" and "CONTRIBUTORS" as listed above.
This library is free software and may be distributed under the same terms as perl itself.
From: Joaquín Mª López Muñoz (joaquin_at_[hidden])
Date: 2005-02-17 07:13:45
Hi David, thanks for trying Boost.MultiIndex!
David Gruener wrote:
> Hello,
>
> I just started using the boost::multi_index_container and ran into some
> trouble. I'm using boost version 1.32.
> Well, I tried a slightly modified version of Example 6 ("complex searches
> and foreign keys"). This is the code:
[...]
> As one might see, the change i made is that the key get_name() of the
> car_manufacturer now comes from a base class called person.
> However, this code won't compile because the compiler argues that there is no
> '*' operator for car_manufacturer. I found out that the problem is
> the handling of the so-called chained pointers in the mem_fun structs.
>
> template<typename ChainedPtr>
> Type operator()(const ChainedPtr& x)const // [1]
> {
> return operator()(*x);
> }
>
> Type operator()(const Class& x)const // [2]
> {
> return (x.*PtrToMemberFunction)();
> }
>
> In the example the compiler calls [1] for car_manufacturer*, which seems
> right. After that, it can choose (among other methods) between [1] and [2]
> (Class = person) with argument type car_manufacturer and chooses the better
> matching [1], which produces the error.
Yes, your analysis of the problem is correct.
>
> However, maybe things could be done better here. The mem_fun structs
> could be smart enough to deal with classes derived from template argument
> Class. That means [2] should be called in the case of operator() with a
> "Class"-derived argument.
> I don't know much about metaprogramming, but the following diff
> for const_mem_fun (as an example) of file mem_fun.hpp seems to work
> with a compiler supporting full template specialisation at class scope.
> Unfortunately gcc up to 3.4 doesn't seem to support this, while icc 8.0
> does. I'm pretty sure there is a better solution than mine, assuming
> that a "problem" even exists, which is my question to you. :]
>
[...]
> ------------------------------------------------------------------------------
>
> Would some type handling like this make sense?
Yep, it would make sense. I guess it can probably be simplified,
so as not to rely on full template specialisation at class scope.
Anyway, please keep reading.
>
> If not, what's the best way to use a derived member function
> of a (pointer) member of the container type as key?
This is an unfortunate problem with pointers to members as template
arguments. The expression
const_mem_fun<Derived,int,&Base::get>
is not valid even though &Base::get, an int (Base::*)() const, is implicitly convertible to int (Derived::*)() const.
However, there's a way out using the alternative
const_mem_fun_explicit. See the attached example. Funny thing is
that const_mem_fun_explicit was never meant to solve this kind of
problem :)
>
> If yes, are there similar issues in other extractors like
> member extractor?
member<> has the very same problem, and in this case there's no
alternative member_explicit<>. You can work around this with a little
more typing than desirable. See the attached code for an example.
I guess I'll have to consider how to best approach the problem. I'm
reluctant to change the extractors as they
are relatively fragile for buggy compilers (MSVC++ 6.0) but will
think it over. Hope the attached workarounds serve your needs
in the meantime.
Source:
https://lists.boost.org/Archives/boost/2005/02/80594.php
business.
--Jonathan Robie on the xml-dev mailing list
the historical fact is that XML 1.0 deliberately chose to force all information content of whatever kind into a textual representation based on the evidence that this pays off well in terms of interoperability.
--Tim Bray on the xml-dev mailing list
I think this was all part of a conspiracy for Chinese to catch up with Japanese, since the Chinese code pages (until now) didn't have a mess the scale of SJIS. But between HKSCS and GB 18030, they are making up for lost time.
--Kenneth Whistler on the unicode mailing list.
--Rick Jelliffe on the xml-dev mailing list
XML's original requirement of compatibility with SGML has served its purpose. At this point SGML, if it is to survive, needs to worry about compatibility with XML.
--Joe English on the xml-dev mailing list
XML is *text*. It is made from *characters*, and arbitrary binary strings have no place in it. Once you change that, you have essentially ruined XML as a textual markup language.
People could say that NUL et al. are still *characters* and so would be fine, even in UTF-8 encoded documents, but I bet they'd be rather unhappy to find their binary streams changing if I saved the document as UTF-16.
--Gavin Thomas Nicol on the xml-dev mailing list
XPath/XQuery/XSLT/SAX/DOM/RDF/godonlyknowswhat. THAT's the real power of XML as an object serialization format, and this totally overwhelms its limitations ... at least today. If someday there are cheap, ubiquitous ASN.1 tools for parsing, transformation, manipulation, display, and querying, then this advantage of XML goes away, and we'll be arguing about this on ASN-DEV or whatever.
--Mike Champion on the xml-dev mailing list.
--Derek Denny-Brown on the xml-dev mailing list
Just because it comes from Microsoft, it's not necessarily bad.
--James Clark
Read the rest in XML.com: Clark Challenges the XML Community.
--Joel Spolsky
Read the rest in Joel on Software - Working on CityDesk, Part Three
XML is about as close as you can get to the *opposite* of O-O thinking. The O-O paradigm is that objects are nicely packaged opaque bundles of code & data that do things through carefully designed & presented interface, and you're not supposed to bother your pretty little head about what's happening inside.
A chunk of XML on the other hand perforce exposes all its internal structure and does precisely nothing.
--Tim Bray on the xml-dev mailing list
OO is good because it does data hiding, which is what you need when the data is owned by one application. XML is good because it doesn't do data hiding, which is what you need in order to communicate data beween multiple applications.
OO and traditional database technology are focused on information storage; XML is focused on information interchange: hence the difference.
--Michael Kay on the xml-dev mailing list.
--Richard Dawkins
Read the rest in Guardian Unlimited | Archive Search
You may have noticed that almost every edit box on the Macintosh uses a
fat, wide, bold font called Chicago.
--Joel Spolsky
Read the rest in User Interface Design for Programmers
I!
--Rick Jelliffe on the xml-dev mailing list
A good standard needs to rot on the line for awhile, just like a game bird. We made up this myth called Internet Time and used it to muscle other groups and works off the line, only to discover that our own groups and works are every bit as flawed and made worse because they didn't spend enough time rotting in the wind before being cut down for basting. Some people think the revolution is over, killed by BigCos, lawyers, the music industry, and so on. In fact, the normal damping controls kicked in about on time. I think the real revolution is just starting and most of what has happened for the last ten years was staging. This revolution is about communication. Like a performing band, it takes a lot of practice before even very skilled players can improvise in real time.
--Claude L (Len) Bullard on the xml-dev@lists.xml.org mailing list
XML.
--Joel Spolsky
Read the rest in Joel on Software - Working on CityDesk
Not only is it ethically unreasonable to maintain the delusion that you can do anything serious on the Net in English only, it's also damn bad for business.
--Tim Bray
Read the rest in XML.com: Practical Internationalization
This is a bit of a chronic problem in the W3C (and other bodies that do part of their business in public, part in private for all I know). Certain unarticulated assumptions are discussed internally or taken for granted because of previous work until they become part of the fabric of the organization's being. The "attributes have no defined order" meme is a fairly trivial example. The "PSVI as the foundation for the next generation of XML" is a more serious one. We saw last summer in the great namespace URI debate what happens when someone innocently falls afoul of a revealed truth that never quite got written down in an authoritative manner. For that matter, there's now a "don't touch the namespace URI question with a 10 foot pole" meme that is also not written down anywhere.
--Mike Champion on the xml-dev mailing list.
--Jakob Nielsen
Read the rest in Beyond Accessibility: Treating Users with Dis...
even after the suicide attack last October on the USS Cole, in the port of Aden ... that.
--Reuel Marc Gerecht
Read the rest in The Atlantic | July/August 2001 | The Counterterrorist Myth
What frustrates me is that the well-understood principles of "intelligent design" are the same as those that contribute to evolutionary survival -- simplicity, modularity, re-usability, etc. Conversely, if it's hard to understand, it will be hard to build; if it's hard to build, it will break; if it breaks, it won't survive. SGML, for all the great ideas buried in there somewhere, lived and died (OK, it failed to thrive, don't flame me!) on a very common and predictable trajectory.
--Mike Champion on the xml-dev@ mailing list.
--Alan Cooper
Read the rest in Alan Cooper of Cooper Interaction Design sees...
Yahoo maps always assume that I want to drive, buy gasoline, pollute, and park ... rather than use Caltrain, Muni, BART, or any of the other more globally cost-effective transport systems.
--David Brownell on the xml-dev mailing list
Talking about the technical superiority of Unix is not going to cut it anymore. 10 years ago, Unix was so much better than Windows; and still Windows gained market share in both the workstation and server space. Today's Windows is infinitely better and Unix has even less of a chance to win on technical merits alone. Unix reminds me of an army that has technical superiority but continues to lose against an "inferior enemy".
--Vikram Kulkarni, on the WWWAC mailing list
After three years of W3C-watching, the thrill, as they say, is gone, and waiting for the next spec to drop out of the machine has definitely lost its appeal.
--Edd Dumbill
Read the rest in XML.com: XML You Can Touch [Oct. 10, 2001]
During the Vietnam War, organizing a nationwide peace movement took years. During the Gulf War, months. This time, it's taken days.
--Jeffrey Benner
Read the rest in Give Peace a Website.
--Mike Champion on the xml-dev mailing list
Often technologies are replaced by newer "disruptive technologies" that don't work all that well at first, but nonetheless solve a problem or expand a market. So, for example, many of you may remember how GUI designers used to scoff at web-based applications, which clearly were far cruder than native GUI applications--but became the dominant new development paradigm nonetheless. In fact, you might go so far as to look at emerging technologies that are disparaged by the mainstream. And I'm not just talking about really stale stuff. Perl is still a great technology, but one good way to know that Python and then PHP were going to be hot was the amount of energy the Perl community spent talking about why they didn't work as well as Perl. PHP in particular was a competitor that took a key part of Perl's market, simplified it, and reached new users.
--Tim O'Reilly on the "Computer Book Publishing" mailing list
All around me I see XML that is as proprietary to particular vendors as their native "binary" notations were. I see the open systems *spirit* that is implicit in XML jettisoned while the *syntax* of XML - the only thing explicit in the standard - is used to create new proprietary notations. At this rate XML will never be "the new ASCII". But it stands a very good chance of being "the new RTF".
--Sean McGrath on the xml-dev mailing list
Netscape 6.2 = Mozilla 0.9.4 + AOL advertising + AIM
--Scott Granneman on the WWWAC mailing list
One of the good things about the Windows monopoly is that I can help just about any Windows user with a whole host of problems over the phone, because I'm pretty sure what their screen looks like. Absent some kind of remote control software (I do use VNC with clients for lots of things), multiple Linux desktops will make this kind of thing impossible, and either narrow your range of support options to the people who support KDE desktop or GNOME desktop or whatever, or require the use of remote control software (we will ignore for now Linux's advantages in terms of remote administration, since I'm speaking only of end user questions, not administering the machine). A plethora of UI's is not going to be that good for end users, I don't think. I firmly believe that too much choice is a bad, bad thing.
--David W. Fenton on the WWWAC mailing list
XML always was very dull. SGML is mind numbing. It's more fun to write code to solve problems that had been solved thousands of times before, adding new tweaks every time to make it necessary to write even more code whenever the systems have to interoperate. But for some reason, the greedy fools who run businesses don't WANT to pay us geeks to do this, so they make us use simple, standardized grammars, parsers, APIs, transformation engines, constraint satisfaction validators, etc. when megabytes of custom code would work slightly better sometimes.
Worse yet, this stuff *is* infesting the infrastructures of pizza chains and other such companies. It's just so utterly bor-ING that nobody talks about it. Re-writing an order entry application for every mobile device would be more interesting: we could learn arcane details of soon-to-be-obsolete environments and entertain each other with parties about the fascinating differences between the screen display routines of Nokia and Motorola phones, or Pocket PC and Palm PDAs. But no, the damned XML people want to put the same old XHTML or WML rot everywhere.
--Mike Champion on the xml-dev mailing list
What's in a name? A rose by any other name could still be an infringement on rose.com if we get a sufficiently clueless judge.
--Michael Swaine
Read the rest in WebReview.com: April 27, 2001: Swaine's Frames: IPgrams for New Cynics.
--Benjamin Franz on the XML-DEV mailing list
"RAND" means "Lets talk about patents later, and I promise we won't single you out to get screwed".
--Wayne Steele on the xml-dev mailing list
It is unforgiveable that XLink, which shouldn't have been hard to specify, took so many years to get out the door. Not an XML problem, a politics/people/process problem.
--Tim Bray on the xml-dev mailing list
MS erred on the side of delivering MSXML4 a little late (at least, there were buzzes that it would come out 2 or 3 weeks before it did), which I think is pretty responsible of the people involved. I think the delivery of MSXML 4 is a quiet milestone for XML: if it is well-behaved, efficient, free, conformant.
--Rick Jelliffe on the schematron-love-in mailing list
RDDL answers the question "what is at the other end of the namespace URI?" with the answer "a standard assortment of resources provided by the controller of the namespace."
--Jonathan Borden on the xml-dev mailing list
The combination of DOM, namespaces, and XPath is likely to sell quite a bit of Prozac in years to come, I fear.
--Mike Champion on the xml-dev mailing list
In the InfoSet, ID-ness is a property of the attribute information item, it's not a property of the attribute declaration, in fact the attribute declaration isn't even in the InfoSet. XSLT transforms a tree to a tree, and the idea is that the tree reflects the InfoSet as closely as we can make it. But if the result tree doesn't contain attribute declarations, but does contain ID-ness as a property of the attribute instance, then we're going to end up with a tree that can't be serialized to XML, whether in streaming mode or otherwise. You can't parse an XML document to an InfoSet, make arbitrary changes to properties of objects in the InfoSet, and then serialize back to XML.
--Michael Kay on the xml-dev mailing list
>
--David Carlisle on the xsl-list mailing list
The case against internal subsets is compelling IMHO. They don't round trip. This creates significant problems for software developers working with heterogeneous, loosely coupled XML processing systems. i.e. anything other than XML "viewers".
--Sean McGrath on the xml-dev mailing list
I've been in software for 20 years and I've seen lots of interoperable cross-platform syntax and very rarely an interoperable cross-platform data structure or API. Obviously, once you're dealing with some XML inside of a program, you think in terms of the structure. But XML's interoperability is strongly linked to the fact that its definition is syntactic.
--Tim Bray on the xml-dev mailing list
--Mike Champion on the xml-dev mailing list
Actually, there is still a LOT of data in hierarchical databases, and some new development. Nothing that works well is ever wiped out. I envision a world with both relational and native XML databases - together with hierarchical ones.
--Jonathan Robie on the xml-dev mailing list
SAX is anything but obvious to the programmers I've worked with, even programmers with extensive GUI experience (people who have actually built a GUI framework don't have any problem). And even after being pointed to SAX they don't always have much of an idea of how to proceed. This isn't entirely their fault. We have nice frameworks for dealing with events generated by GUIs. With SAX there is no such thing, that I'm aware of. The developer is faced with a stream of events and no framework for dealing with them.
--Bob Hutchison on the xml-dev mailing list
I was helping out on a very early-stage sales call last week and the potential customer sketched out a use case that was essentially persistent object serialization; XML was simply a convenient intermediate format, not anything's native language. I innocently -- ignoring the sales guy kicking me under the table :~) -- asked if they had looked at an OODBMS solution, even mentioning The Company Formerly Known as Object Design. As it turns out, they had briefly considered an OODBMS solution, but it was unworkable because of all the heterogeneity among the systems in the back office -- Java versions, evolving versions of the classes being serialized, 3rd-party products that they didn't control the source code to, the likelihood that even more cooks would be arriving in the kitchen soon, etc. So, an OODBMS was seen as inadequate, an RDBMS massive overkill, but an XML DBMS might be just the right thing in the middle.
--Mike Champion on the xml-dev mailing list
Microsoft wants you to believe that its commitment to XML means that you'll be able to share .Net-based information across dissimilar platforms. Hogwash. All XML amounts to is a standard way of pointing to things. XML doesn't have anything to say about whether the things it points to also conform to standards.
A perfectly standard XML file can say, "This thing is a title, this other thing is a menu, and this last thing is an ActiveX component." If your platform doesn't support ActiveX components, that's too bad. Since it's a foregone conclusion that Microsoft will be littering its XML with pointers to Win32-based components, the best that can be said about its adoption of XML is that it will make it easier for browsers and applications on non-Windows platforms to understand which parts of the document it must ignore.
If Microsoft was genuinely interested in XML as a means to greater interoperability, it would guarantee that its Office applications and .Net development tools would produce XML files that never point to Win32-specific components. Instead, whenever XML files point to active content, such as an executable component, that executable content should be platform-neutral. And we all know what that means, folks: Java, the environment Microsoft is dropping from future versions of Windows.
--Nicholas Petreley
Read the rest in Debunking Microsoft | Computerworld News & Features
I used to attend many, many conferences in addition to which I would pester the speakers for hours. This was a lot of fun in the early computing days; the folks at hobbyist cons really knew their stuff. Lately, conferences are not so fun. There are a lot of sales droids and arrogant business people spouting superficial knowledge (not all business people are arrogant, but a lot show up at seminars). You have to search for those who are knowledgeable and generous with their knowledge. There are still some good shows (e.g., Perl/Linux) but I haven't been traveling lately. If you are starting out in technology writing, carefully chosen seminars can be very valuable.
--Julie Petersen on the "Computer Book Publishing" mailing list.
--Dan Kohn on the tidbits mailing list
As an analogy, take English and Danish. They have almost identical alphabets but are nevertheless different languages. An alphabet is a limited set of characters that can represent an unlimited number of words through recombination. XML is an alphabet, not a language. It provides the primitives for describing larger concepts, and it works by allowing an unlimited number of semantic concepts to be encoded using those primitives. Any XML parser should be able to declare any given XML document structurally valid -- analogously to the way native speakers can tell if a word is or isn't part of their native tongue -- but that says nothing about whether the contents of that document will be comprehensible to the recipient.
--Clay Shirky
Read the rest in XML.com: Web Services: It's So Crazy, It Just Might Not Work
None of the XML databases on the market can really claim that they support standards (including dbXML) because there are no well established standards evolving for XML databases. The goal of the XML:DB initiative is to start addressing these common issues between XML Databases where their requirements don't fall within the charter of the W3C... Though the charter of the W3C seems to be organically spreading into domains it had never originally been drafted, and has absolutely no business, but that's a whole other rant.
--Tom Bradford on the xml-dev mailing list
Clearly, what the consumer wants to do -- and has done now for many decades -- is buy recorded music and have the ability to make copies. It's been very clear that making compilation tapes, sharing tapes with friends, turning on your friends to new bits of music actually has propelled the growth of the industry. To view the simple act of recording as the enemy is really missing the boat
--Chris Gorog, chief executive of Roxio
Read the rest in Whatever happened to fair use? (10/31/2001)
Maybe the world should have tried harder to figure out what HyTime had to say before jumping on HTML and XML. BUT "worse is better" wins just about every time, deal with it! As I age, I get increasingly annoyed that natural selection chose all sorts of "worse is better" designs for my joints, muscles, neurons, etc.... and there's about as much point in hoping for the world to repeal the 80/20 rule as there is of hoping for eternal youth.
--Mike Champion on the xml-dev mailing list
the most important right that a user of Open Source software has stems from the fact that if you have $$$, there are people out there who will make the program do whatever you need.
--John Cowan on the xml-dev mailing list
Per IETF dogma, the XML spec and the RFC both say that the charset header is authoritative. Well, yes, except when it isn't. Software that ignores it when it's demonstrably wrong is hard to get too angry at.
--Tim Bray on the xml-dev mailing list
we also need to learn the lessons from what has happened on the web. The idea of having forgiving tools sounds great in theory, but it encourages sloppy practices on the part of developers. It is because of "forgiving" browsers that we have a web populated with malformed HTML content. We should try not to repeat that mistake with XML.
--Michael Brennan on the xml-dev mailing list
site developers can test for browser capabilities using code that silently attempts "testing" the things the site needs to do. Using a useragent string to determine capabilities is horribly unreliable, but just reliable enough that it makes you think that your site is working and makes it impossible to figure out why when it breaks ("What?!? You mean that version X of Browser Y claims to be running on a PC when it is running on a Macintosh?!?" -- true story). I have been burned too many times; never trust the useragent string. People spoof it, and vendors screw it up.
--Joshua Allen on the XML-DEV mailing list
need a different program to access it.
The "best viewed with" button is bad, but there is worse. Worse are sites which not only ask you but which force you to use software which they control, so they will effectively have control over all your browsing -- even when you are browsing someone else's site. You press "search" the Web and there you are straight back to old site - not just reading it, but feeding it your personal interests, and being fed back its advertising, and its answers on where you should buy things, and what you should read for news and political opinion.
--Tim Berners-Lee
Read the rest in SiliconValley.com - Dan Gillmor's eJournal.
--Tim Bray on the ietf-xml-mime mailing list
Come on, guys. We should avoid being sucked into the self-fulfilling hype machine created by the media. Like most good conspiracies this is the result of group dynamics and not a consciously devised plot, but the effect is equally pernicious. The media, in need of a good story, latch onto the latest greatest technology and hype it to the moon: "XML is going to slice bread, put men on the moon *and* keep you company on a Saturday night." This has the synergistic effect of providing these same media outfits with a brand new story 6-12 months later when the hype inevitably proves unjustified: "Can you believe this, a year has passed and I still spend my Saturday nights watching Friends reruns, with XML nowhere to be seen." This phenomenon has been well articulated by Gartner Group as Technology Hype Curve.
--Matthew Gertner on the xml-dev mailing list
The work that the W3C is doing, including XQuery, XPath, and XML Schemas don't even come close to truly addressing the requirements of XML databases, and so vendors are forced to do what is necessary to provide the users with what they need. Sometimes this is bastardizing XPath, sometimes it's extending schemas to support indexing. These are not bad things, but are solutions being born out of necessity. This is an infant market. There is no right way to do things, and so there is no one vendor doing it right, much less doing it better.
--Tom Bradford on the xml-dev mailing list
Is it fair for a doctor to spend his 3-month salary on a copy of Microsoft Office? Should an artist spend a year's salary on Adobe Photoshop? No, this is wrong.
--Cheng Yi
Read the rest in China On Pirates: Blow 'Em Down
In 16 years as a Mac fan I've learned that it's OK to like Apple, but that one can never trust Apple.
--Adam Wildavsky on the java-dev mailing list
What we have enjoyed for ten years is the investment of the previous years of building and replacing nodes. We have had a free ride on every university that put in an Internet node and allowed all of the Internet traffic to cross it unimpeded by inspection or tariff. It has been an Internet of open borders given the natural wealth of each territory and its ability to sustain the open traffic. But the costs to improve that traffic, to offer more services to the travelers, are mounting and are being met by deficits. That is a recipe for stagnation at best, and collapse at worst.
Unless we are willing to nationalize these assets, or regulate them as public utilities, the privatization of the assets continues unhindered. The W3C isn't bad; it is too weak to forestall this rather predictable and necessary sea change. Because it attempts to be a hegemony, and because the realization of its vulnerability comes late to that organization, it is unlikely to withstand these pressures for long if at all and its role has to be examined from the perspective of what it can do successfully. As a technology incubator, it is successful. As an enforcement agency for member behavior, it is devastatingly underpowered.
--Claude L (Len) Bullard on the xml-dev mailing list
Those who can, do; those who can't, sue.
--Michael Swaine
Read the rest in WebReview.com: April 27, 2001: Swaine's Frames: IPgrams for New Cynics
While I think that powerful and expressive schema languages are a progress, I also think that imposing them would be a regression.
Schema languages are not that new and I am still thinking that one of the main progresses of XML over SGML is that DTDs are no longer mandatory!
And I think that it's important to make sure we can continue to perform XSLT transformations without defining first a schema.
--Eric van der Vlist on the xsl-list@lists.mulberrytech.com mailing list
I [speaking for myself, not my long-suffering employer, blah blah] would also submit that it is the DUTY of vendors in a fast-moving technology field to attempt to get real-world experience with desirable features (such as the ability to do full-text searching within XML elements) before proposing them for standardization. Standards are best when based on the intersection of field-tested technologies rather than the union of plausible technologies. &myusualrant;
"Embrace and extend" got a bad name because a certain large company was accused of trying to addict its customers to extensions that were simply different or more convenient ways of doing what could be done perfectly well within the standards. Offering customer-demanded extensions to fill obvious gaps in the standards is another thing entirely, ESPECIALLY if that knowledge is fed back to the standards keepers.
--Mike Champion on the xml-dev mailing list
I still can't help feeling that the real heart of the problem is the PSVI. Because Schemas support the notion of a PSVI that includes "types" (sorry for use of that word), developers want to leverage that info. But that info is not in the instance document, and is not normally accessible to applications because prevailing schema processors just use the info for validation. The info is locked away in a black box.
--Michael Brennan on the xml-dev mailing list
--Jakob Nielsen
Read the rest in End of Homemade Websites (Alertbox Oct. 2001)
It is probably time for some to face up to the reality of what a technical consortium is: you pay fees to be a member and in accordance with the rules of the consortium, attempt to influence the consortia process to the benefit of your business. Did anyone really buy into the Moral Majesty argument? Do you really think that business goals and means change simply because of Berners-Lee's reputation?
--Claude L (Len) Bullard on the xml-dev mailing list.
--Tim Bray on the xml-dev mailing list
FOP just isn't robust enough for complex documents yet.
--Norman Walsh on the docbook-apps mailing list.
--Jonathan Borden on the xml-dev mailing list
The only normative definition of XML is syntactical. The only normative definition of namespaces is syntactical. These definitions are implemented by tons of interoperable software. The Infoset, simply because it has come after XSLT and XPath and DOM and SAX chronologically, is an afterthought. The PSVI is an elaboration of that afterthought. Working programmers are generating XML with various flavors of print() statement and reading it through a variety of interfaces (including Notepad :)) and not apparently having too much difficulty.
--Tim Bray on the xml-dev mailing list
You can write code that operates on any XML document, the proof is XML tools, but as soon as your code requires a semantic interpretation of an XML document, it has to follow a implicit or explicit schema
--Nicolas Lehuen on the xml-dev mailing list
It is very typical for an XML application to want to associate certain metadata with XML information items to suit certain processing needs. One can easily envision different metadata vocabularies to suit different domains. None of these are inherent in the instance itself. None of the processing done with the instance and associated metadata is a realization of the instance's true form. The only true form of the document is that which is in the instance itself, and that's just a bunch of text and pointy brackets.
--Michael Brennan on the xml-dev mailing list
a sure sign of the decline of a civilization is when they start to build walls to keep out the pagan hordes. Patents and copyrights are walls. Once a company resorts to lawsuits to keep out competition, its days are numbered. It ceases to see continued innovation as essential to survival.
--Jeff Lowery on the xml-dev mailing list
the W3C should adopt a policy of involving itself only with RF patents, recognize that this is difficult and complicated, and just deal with it. Tools that are available to achieve this goal include:
-
--Tim Bray on the xml-dev mailing list.
--Michael Kay on the xsl-list mailing list
A patent-encumbered web threatens the very freedom of intellectual debate, allowing only large companies and big media houses to present information in certain ways. Imagine where the web would be now if only large companies were able to use image files
--Alan Cox
Read the rest in The Register.
--Bruce Perens
Read the rest in HP Supports Royalty Free Standards for Web Infrastructure
You may remember the dark days of 1995 when, "Well, it works on my browser!" was a typical Web "designer's" response to problem reports. Netscape and Microsoft both claimed to be tired of playing this game, where a bug on one system had to be faithfully duplicated on the other, so that "working" documents would continue to work. XML was expressly designed to fail if non-compliant so that everyone would know that the document was bad.
--Christopher R. Maden on the xml-dev mailing list.
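The draconian error handling the quote describes is observable with any conforming XML parser. A minimal sketch using Python's standard-library parser (the document and variable names here are illustrative, not from the quote):

```python
import xml.etree.ElementTree as ET

# A document with a mismatched tag -- the kind of input a "liberal"
# HTML browser would silently repair.
malformed = "<doc><p>unclosed paragraph</doc>"

try:
    ET.fromstring(malformed)
    parsed = True
except ET.ParseError:
    # A conforming XML parser must report a fatal error here,
    # not guess at the author's intent.
    parsed = False

print(parsed)
```

The same input fed to an HTML parser would typically yield a best-guess tree, which is exactly the divergence-by-forgiveness that XML's "scream and die" rule was meant to prevent.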
--Tim Bray on the xml-dev mailing list
I am not usually a big Microsoft fan. But I have openly lauded their efforts at standards compliance since Jean joined the SGML ERB. Oh, my friends mocked me - they said it wouldn't last. "Embrace and extend - it'll all end in tears, you'll see." But I was young and foolish, those dozens of Internet years ago. Now I see their wisdom, alas, too late.
--Christopher R. Maden on the xml-dev mailing list
this privacy you're concerned about is largely an illusion.
--Larry Ellison, CEO Oracle
Read the rest in Oracle boss urges national ID cards, offers free software (9/22/2001)
I'd rather see secured cockpits (as found in many nations) than the illusion that more and better ways to spy on non-criminals could prevent criminals from taking over planes.
--David Brownell on the xml-dev mailing list
The W3C succeeded based on stretching the facts about what they could do and what was reasonable to do. The web zealots publicly beat the hell out of the organizations and reputations of those who did have the right approach to standard infrastructure building in a way reminiscent of the mobs that burned the library of Alexandria. The damage was enormous and the results not nearly as good as promised. The dot.bomb removed most of their credibility in the investment world. Now it is "build it, demo it, market like hell, and maybe they will come" which is precisely the world before the W3C. Organizations don't matter as much as utility and perceptions of utility.
-- Claude L (Len) Bullard on the xml-dev mailing list
By forcing SGML and almost every other data language of note to the sidelines, by setting up an addressing system that ties all information to the systemic definitions, by insisting to the world that one group has a "moral" hegemony for Internet content and the specification of the systems by which it is obtained, the webHeads got the focus they were after. Now they can't live in the spotlight.
What does that mean? It means that almost every effort to use hypermedia theory and develop hypermedia applications became focused on exactly one medium, one organization, and to the eternal consternation of the markup specialists, one subset of SGML. All of the decades of research, researchers and resources are trying to pour themselves into one mold through one spec. Meanwhile, Berners-Lee and some of the core W3C architecture experts are squeezing out a backdoor called the Semantic Web with RDF, Notation 3, etc. leaving all the refugees they created behind in the somewhat squalid situation you have now.
--Claude L (Len) Bullard on the xml-dev mailing list
I estimate using JDOM rather than DOM has saved me at least 3 months work.
--Sasha Bilton on the jdom-interest mailing list.
--Jakob Nielsen
Read the rest in Mobile Devices Will Soon Be Useful.
--Edd Dumbill
Read the rest in XML.com: Picture Perfect [Sep. 12, 2001]
Sometimes you have to take the hit on the chin and say to your customers "This is a bug. We know you may have stored data in form X (and here is a tool to help _filter_ the problematic data for you before you deliver it to a client if you absolutely cannot repair your database), but it has to be changed because that behavior was a bug and your XML _will not_ interoperate with other people if it is not fixed now. And the more data you store like this the worse it will get."
--Benjamin Franz on the xml-dev mailing list
--'Dido' Sevilla on the DocBook Apps Mailing List
there is nothing in XSLT 1.0 that prevents a sequence of instructions being executed in parallel. Occasionally the spec slips into describing the semantics as if execution is sequential, but the language has been carefully designed so that the effect of an instruction never depends on the effect of a preceding instruction. The only "fly in the ointment" is extension functions with side-effects.
--Michael Kay on the xsl-list mailing list
Using words like "XML" and "WebDAV" for marketing hype seems to be popular, actually *implementing* them seems not.
--Julian Reschke on the xml-dev mailing list
While these are acts of barbarians and are despicable....let's us never lose our trust in the decency and humanity of human beings around the world. Fundamentally, at our deepest core, we are all brothers and sisters and the goodness and decency of all us will prevail.
--Rachel Foerster on the xml-dev mailing list
We are all angry, confused, scared and grieving. But New Yorkers look after each other in hard times. But more than just protecting ourselves physically we all need to keep our heads to avoid a political or economic meltdown. It would be the deepest tragedy if we signed away our civil liberties in our fervor to take vengeance on a shadowy terrorist enemy. The promise and priveleges of a democratic society are our greatest treasures. If we allow terrorists to take these things away from us the they have truly dealt us a death blow.
--Jonathan Kopp on the WWWAC mailing list
I'm starting to wonder if this isn't just another example of MS's infamous 'Embrace and Extinguish' policy in operation. XML works well (by *design*) on any OS and any computer language. It is open and available to all comers. It is being extremely successful in creating services that are not tied to any one platform. 'MSIE-XML' only works on Microsoft's software platforms but is trumpeted (at least in all the places the consumer is likely to see) as XML.
The _specific_ reason the XML specification says 'scream and die' on parse failures is *because of* Netscape's and Microsoft's web browsers being 'liberal' in 'parsing' HTML. One of the explict _goals_ of 'scream and die' was to prevent that from happening, again, in the browsers. For Microsoft to now say 'But MSIE isn't an XML parser (it just quacks like one most of the time)' as justification for breaking that _specific_ goal of XML is dis-ingenuous at best.
--Benjamin Franz on the xml-dev mailing list
The IE5 "XSL" isn't really a good implementation of the December '98 WD It has lots of extensions and the documentation makes no distinction between what is in the draft and what isn't, but that's really only a small point, what they really did wrong was take a draft that said
The XSL Working Group will not allow early implementation to constrain its ability to make changes to this specification prior to final release. It is inappropriate to use W3C Working Drafts as reference material or to cite them as other than "work in progress".
and take that as the basis for a production release of what is probably the most widely distributed piece of software on the planet: their web browser which rather famously is/was deeply integrated with the entire windows OS.
By doing that they were guaranteeing future confusion over the version they'd released and they knew they were going to have to support and the final version of the language which they knew would be different.
Even now Microsoft documentation insists on calling this language "XSL" to avoid them admitting they made a mistake, and to prolong the confusion.
--David Carlisle on the xsl-list mailing list
I have a problem with specs that keep churning on the basics and never settle down long enough for the tool vendors to get the tools stable enough for the rest of us to make money. Internet time is a myth. Internet business failure is not.
--Claude L (Len) Bullard on the xml-dev mailing list
It is a best practice to design document schemas so that the structure of the resulting documents is evident from inspection of the documents themselves without requiring reference to the schema itself. (as much as is reasonably possible). For example HTML processors are perfectly capable of processing HTML documents -- without reading the HTML DTD each and every time the document is parsed --. Schemas can be incredibly useful, particularly during the software development phase i.e. assisting with the automatic generation of user interfaces, storage and indexing of documents in databases, debugging, etc. but still, it is a best practice not to needlessly rely on a schema at every step in a production system.
--Jonathan Borden on the xml-dev mailing list
IE6 is fatally broken as an XML system.
--David Carlisle on the xsl-list mailing list.
--Steven R. Newcomb on the xml-dev mailing list
If we can keep Web services as simple as the Web, we'll have done very well.
--Tim Berners-Lee, Software Development 2001 keynote
the Hippocratic oath of the specification designer - when in doubt, try to pick the solution which leaves the most possibilities for later, better informed, evolution (this oath was obviously never administered to the Schema WG).
--Matthew Fuchs on the xml-dev mailing list
Everything is *about* something else, including markup and namespaces. If something wasn't about something else, it would have no reason to exist. Markup and namespaces are *about* labeling things, but so are many other formats, so XML has to be *about* something more than that in order to justify its existence. What it's about is becoming more lost in the sh*t pile as time passes.
--Tom Bradford on the xml-dev mailing list
I suspect that XML-RPC handles 75-90% of what people have been doing using HTTP and CGI for program-to-program communication forever. I have a hard time seeing SOAP / UDDI / WSDL doing much to improve on "URLs from Hell" (or URIs) as far as an orderly manner is concerned.
--Simon St.Laurent on the xml-dev mailing list
It is a plain fact that the vast majority of major XML parsers as well as significant XML related specifications such as XSLT, XHTML, XLink, XSD, RELAXNG, Schematron etc. are XML Namespace aware. Love em or hate em XML Namespaces are here to stay and we are best to define namespace best practices.
--Jonathan Borden on the xml-dev mailing list
Markup isn't *about* anything. Namespaces aren't *about* anything. They are labels. They provide part of the infrastructure you need to build edifices of schemaware and displayware and ebizware and so on.
--Tim Bray on the xml-dev mailing list
Emacs/PSGML is one of the best tools for editing XML (DocBook or otherwise). In fact, I never use anything else.
--Norman Walsh on the docbook-apps mailing list
of all of the things the W3C has given us, the DOM is probably the one with the least value.
--Michael Brennan on the xml-dev mailing list
You can see XML-RPC/SOAP and hangers-on as an RPC facility that is unusually transparent - in that you can see how it all works and implement it yourself without recourse to complex libraries beyond an XML parser - and at the same time unusually opaque - in that you really aren't going to break things by changing from a little-endian to a big-endian architecture, or from Win2K to Solaris, or whatever. These are probably good things in an RPC facility.
--Tim Bray on the xml-dev mailing list
the internet mania may have been fueled by hype and greed, but it was built on the solid foundation of 20 years experience with the internet itself, 10+ years of experience with SGML, several years experience with HTTP/HTML, and a hard core of people who had been using the basic technologies in academic/military/research settings for some years. The W3C looked awesome a few years ago when it could tap all that experience ... but we've pretty much used up the intellectual capital that funded its early successes. Trying to re-create the magic by using the labor of huge working groups and massive PR campaigns won't create a solid technological foundation ...only experience can create it ... and the best way to gain the experience is the incremental "what's the simplest thing we can do next that meets the most important needs" approach. The hype giveth a demand for XML products and skills, but the hype taketh away when irrational expectations are not fulfilled.
--Mike Champion on the xml-dev mailing list
having been such a vociferous proponent of one side of this issue, I'll start by saying this was one of the most, if not the most, contentious issue(s) in the development of XML Schema. As a committee, we then chose the technically brilliant strategy of giving both sides what they wanted. As we can see, this remarkably improved XML Schema because, in retrospect, both sides were out to lunch.
--Matthew Fuchs on the xml-dev mailing list
the Web Services hype outstrips plausible reality by a wide margin; none of the "opera loving car" keynotes mention the little detail that the intrinsically greater latency, insecurity, and unreliability of the internet requires applications that employ Web Services to be designed much differently than LAN-oriented DCOM/CORBA apps are designed... and (potentially?) giving a new generation of script kiddies a simple way through all the world's firewalls scares hell out of me.
--Mike Champion on the xml-dev mailing list
Being a security-conscious person, I try to stay updated with the latest service packs. Unfortunately, SP2 for IE 5.5 was a service pack with a hidden agenda. It may have had a security fix or two in it, but was also designed to remove non-Microsoft product compatibility.
--Brad Mathis
Read the rest in IE upgrade cuts off QuickTime - Tech News - CNET.com
Our biggest concern has always been compatibility problems. We face a world where all companies use Microsoft Word, PowerPoint, Excel and other Microsoft applications, so we have to have compatibility and keep up with what Microsoft does. The idea of companies moving away from Microsoft is something that may not happen in the near future, if ever, so we have to explore other areas.
--Ransom Love, CEO Caldera
Read the rest in Caldera CEO mulls unified Unix/Linux - Tech News - CNET.com
Ironically, Microsoft has a unique duty, one that they all too frequently fail to do. They DO have a monopoly on client systems, and a significant presence on server systems. Whether that monopoly is deserved or not is not an issue that's relevant to this list. What is relevant is that as a monopoly they also are the clock that everyone else sets their watches to. If Microsoft fails to adopt a standard, then the chances that the standard will be adopted by anyone else becomes significantly more limited. they are signatories to the W3C, they are involved in all standards groups in the W3C, and so reasonably, they should provide at least basic level implementations of those specifications that are within the W3C purview that have BECOME recommendations. If they want to promulgate a superior way of doing things as well, that's great -- that's called innovation, and is something that Microsoft claims every time the government threatens to take them to task for stifling it -- but they should as responsible members of the W3C be willing to implement the basic level of support.
--Kurt Cagle on the xsl-list mailing list
--Counterpane Internet Security
Read the rest in Crypto-Gram -- 15 August 2001
--Ganesh Prasad
Read the rest in Linux Today - Guest Column: Will Open Source Lose the Battle for the Web?
Like many, I was originally dismayed by Netscape 6, both in its pre-release and "final" 6.0 forms. On both Mac and Windows, they were slow, buggy, and incomplete. But Netscape 6.1 is a different animal. Although it is nearly the same as 6.0 on the surface, so many things have been fixed and improved that it "feels" right again, and I've made it my default browser on both OSs as of yesterday.
(Netscape 6.1 reminds me much of the upcoming Mac OS X 10.1 in that sense -- now that the New Thing is out the door, the developers are concentrating on making it actually work.)
--Derek K. Miller
Read the rest in MacInTouch Reader Reports: Netscape 6
People who are cracking copyright for the purpose of distributing content contrary to the legitimate control of the copyright owner or people who are cracking content for the purpose of redistributing for commercial purposes other people's content -- are criminals and they should be prosecuted as such. But you shouldn't lock up every technologist and make it impossible for them to experiment with encryption technologies merely because there are criminals out there. We don't do that with guns. I mean that's the bizarre thing, you know -- that employees at Smith & Wesson don't have to fear that the FBI is going to swoop down and arrest them because their products led to somebody being killed, yet employees of software companies need to fear that some FBI agent is going to swoop down and arrest them because it's possible that somebody used their code to steal the latest John Grisham novel.
--Lawrence Lessig
Read the rest in OpenP2P.com: The End of Innovation? [Aug. 07, 2001]
XSLT is pretty low in the list of 'things to learn' for average engineers unlike Java, JavaScript, HTML, and XML. This means that, in most situations, there will not be anyone to maintain XSLT-based solutions.
--Don Park on the xml-dev mailing list
the latest opinion from the District Court judge is so extreme that I think people are going to begin to get just how ridiculously extreme this is. You know, in that opinion, Napster had essentially said to the judge that they had found a way to make sure that 99 percent of the downloads would be essentially legal under copyright law. And the judge said, "99 percent's not enough for me. I want 100 percent," and so she basically ordered the company shut down until it could guarantee 100 percent. Now there's no technology that facilitates copying anywhere that's 100 percent effective. I mean imagine a court saying, "Xerox Corporation has to stop producing copiers until it can guarantee -- 100 percent guarantee -- that nobody will violate anybody's copyright law using a Xerox machine." But that's exactly the attitude, and it's that kind of extreme attitude that I think is most harmful to the opportunity for innovation here.
--Lawrence Lessig
Read the rest in OpenP2P.com: The End of Innovation? [Aug. 07, 2001]
Vendors did implement SGML browsers and they worked nicely, were seriously more advanced than Mosaic, etc. What they didn't do was put Internet engines under them, develop a sharable protocol, and give away product. Again, bad tactics since all three of these were suggested and in one case, IADS, the product was given away. What the Army would not do was release the source code and that was the WWW community norm at the time. Timing is everything. But the politics of the SGML companies also got in their way.
--Claude L (Len) Bullard, on the xml-dev mailing list
human beings understand hierarchies better than they understand flat relational models. There's a lot of applications developers who refuse to touch SQL, and I can understand their reasons. But a complex OO hierarchy they're okay with.
To understand the success of XML is to understand that point, IMHO. Hierarchies organize things better for human perception, even if they're more limited (or at least awkward) in the conceptual models they can represent.
--Jeff Lowery on the sml-dev mailing list
I have never - outside of using other W3C technologies - had a need for namespaces and a thin layer of skin peels away from the soles of my feet every time I think about them too hard.
--Sean McGrath on the xml-dev mailing list.
--Tim Bray on the xml-dev mailing list
XML currently suffers from the kitchen sink problem. The companies who participate in the consortia feel the need to send reps to every specification group that they think might be meaningful, and expect to see every widget they might need for their own proprietary product thrown in in exchange for their $$$ dues.
--Jonathan Borden on the xml-dev mailing list
To design an easy-to-use interface, pay attention to what users do, not what they say. Self-reported claims are unreliable, as are user speculations about future behavior.
--Jakob Nielsen
Read the rest in First Rule of Usability? Don't Listen to Users (Alertbox Aug. 2001)
The W3C is a frat house. ISO isn't but only because government types don't do drugs.
--Claude L Bullard, on the xml-dev mailing list
The reason is not technical, for Windows has become a very capable operating system, but ethical. Linux is more than enough to satisfy the government's needs. The hundreds of millions of dollars spent on software license fees leave the country, never to come back. It's the Mexican taxpayers' money, and it could be better spent on developing national industry.
--Arturo Espinosa
Read the rest in Mexican Schools Embrace Windows
entity processing is by far the hardest part of parsing an XML document.
--Ronald Bourret on the XML-L mailing list
Binary XML is a dead end, obsolete before it was even started. Please stop wasting peoples' time on it.
--Mark Hughes on the xml-dev mailing list
I can't talk about the details because of the W3C creed of Omerta, but suffice it to say that the little inconsistencies between the data models of the extended XML specifications (DOM, XPath, XSL, XQuery, the PSVI, the InfoSet, ad nauseam) are slowing W3C progress to a crawl. The solution of breaking a few things, radically simplifying, and starting over is not even politely listened to in W3C circles. Godfather Darwin is going to be taking XML (broadly defined) out to a landfill in 'Jersey before long. The only question in my mind is whether some other reasonably open markup language takes its place (SGML-lite? an ISO or OASIS-defined XML subset? An ad-hoc semi-standardized XML subset that everyone embraces and extends?) or whether we go back to the Bad Ol' Days of proprietary "post-XML" formats and tools.
--Michael Champion on the sml-dev mailing list
gzip is simple, fast, ubiquitous, standard, and gives you far better compression than any binary substitution scheme ever will.
After all, gzip compresses both the tags *and* the content, and can identify repeated sequences of <tag>content</tag>. Binary encodings can only compress the tags...
--Mark Hughes on the xml-dev mailing list
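The mechanism Hughes describes is easy to demonstrate. The sketch below (illustrative only, not a benchmark; the document, the tag codes, and the exact byte counts are all made up for the example) compares gzip against a crude tag-substitution scheme that, like the "binary XML" proposals he criticizes, shortens only the markup and leaves content untouched:

```python
import gzip

# A data-oriented XML document with repeated tags *and* repeated content.
xml = ("<items>"
       + "<item><name>widget</name><price>9.99</price></item>" * 200
       + "</items>").encode("utf-8")

# gzip compresses repeats wherever they occur: tags, content, and
# whole repeated <tag>content</tag> sequences alike.
gz = gzip.compress(xml)

# A stand-in for tag-substitution "binary XML": each tag name becomes a
# two-byte code, but the character content is left as-is.
binary = xml
for i, tag in enumerate([b"items", b"item", b"name", b"price"]):
    binary = (binary
              .replace(b"<" + tag + b">", bytes([1, i]))
              .replace(b"</" + tag + b">", bytes([2, i])))

print(len(xml), len(binary), len(gz))
```

On repetitive data like this, the gzipped form comes out far smaller than the tag-substituted form, because the substitution scheme can say nothing about the content.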
Depending on the data source, it is likely that SQL will be the most industrially used XML query language for some time. It has the advantage of clarity, ease, and years of experience in optimization. The XPath syntax is gnomic and prone to misinterpretation particularly when using namespaces. The bulk of data sources are relational.
--Claude L (Len) Bullard on the xml-dev mailing list
from you for info that they stole from you.
--Scott McNealy
Read the rest in Ballmer talks .Net; McNealy scoffs - Tech News - CNET.com
perhaps it is time for a true markup-based browser minus the impediments of HTML bolted in support.
-- Claude L (Len) Bullard on the xml-dev mailing list.
--Ann Navarro on the xml-dev mailing list
That is the Bad Thing About XML: privatization of public assets by consortia with a follow on distortion of the perception of the need for international standards. We aren't doing ourselves or our heirs any favors with that policy or practice.
--Claude L (Len) Bullard on the xml-dev mailing list.
--Claude L (Len) Bullard on the xml-dev mailing list.
--Steve Gibson
Read the rest in The Attacks on GRC.COM
For the long haul, SGML is a safer better bet. Safety and convenience are sometimes uncomfortable bedfellows as anyone who keeps secure data on a Palm unit they leave at the airport finds out.
--Bullard, Claude L (Len) on the xml-dev mailing list
You complain about "pop-under" ads, cascading javascript pop-ups, ActiveX that snoops on you, cookies that snoop on you, and other such things. The reason you complain about them is because the manufacturer of your web browser believes it is profitable to subject you to those things. The manufacturer of my web browser is a community of software developers which does not have "profit" or "extending a monopoly" among their goals for a web browser, so my web browser protects me from all those things - the cookies, the annoying popping windows, the snooping.
--Michael Sims on the wwwac mailing list
Restricting names to letters and other symbols that are typically used for pronounceable, readable words in each language is not only good for catching transcoding errors (important in some places) and to allow easier use of the names as object names in scripts (where you don't want them to start with a digit), but very importantly it acts against people making random (i.e. private/proprietary) names in their DTDs as a way to capture users. They can still do it, of course, but they cannot pretend "oh, we didn't know a name should be readable so we just used UUIDs for all our names", batting their eyelids.
--Rick Jelliffe on the xml-dev mailing list
I'm still unclear what use the Infoset is to anyone. It seems to be trying to describe how many angels are allowed to dance on the head of the pin.
--Peter Flynn on the xml-dev mailing list
some folks are really troubled that an XML document is an object defined at the level of syntax; they feel that the data objects represented by the syntax are the important, central thing, and that the syntax is an ephemeral side-effect.
--Tim Bray on the xml-dev mailing list.
--Simon St.Laurent on the xml-dev mailing list
Together, we built a highway that everyone could travel, and Microsoft put up a tollbooth.
--Philip Gerskovich, Kodak
Read the rest in News: Kodak tangles with Microsoft over Win XP
The threat represented by Microsoft's forthcoming Windows XP operating system, with its confirmed ability to easily generate malicious Internet traffic - for NO good reason - can not be overstated. The proper executives within Microsoft MUST be reached with this message so that those plans can be reviewed in light of the potential for their system's massive abuse of the inherently trusting Internet.
--Steve Gibson
Read the rest in The Attacks on GRC.COM
I'm guessing, but I suspect the reason there is no built-in XPath function to retrieve the base URI of a node is that at the time the spec came out, there was still a great deal of debate going on within W3C about the precise definition of base URI within the InfoSet, so the XSLT/XPath authors decided it was best to steer clear of the rocks.
--Michael Kay on the xsl-list mailing list
I regret my silence when scripting was being added to eMail. It was the dumbest thing I had ever seen, but I didn't care since I use Eudora. So I didn't work to make the world take notice. Now eMail viruses are born daily to travel the Internet at light speed. And it could have - should have - been prevented.
--Steve Gibson
Read the rest in Denial of Service with Windows XP
I think calling XML a tree is oversimplification. The web is a graph. XML is the web made just a bit less sloppy, but we still have key/keyref and XLink, XPointer, RDF -- all that stuff John mentions. Take the graph that is the web and make it more machine-readable. Take all of the services and data in silos at the edges of the web and expose it as XML documents (as appropriate of course). Now you have one big huge honkin' graph. What is more fun than that?
--Joshua Allen on the xml-dev mailing list
XML has benefitted from the open source movement in terms of making tools available -- the price tags on SGML stuff was horrendous.
--Rod Davison on the xml-dev mailing list
There's a perceived (and, to my mind, false) dichotomy between "documents" and "data". All documents are data; all data can be expressed as documents. The main difference is between regular, or repetitive data, and irregular data. Many tools are useful for both domains.
--Christopher R. Maden on the xml-dev mailing list
the .NET Framework SDK which just released Beta 2 for download on MSDN, should eventually (sooner rather than later) replace MSXML as the start of the food chain. This .NET SDK is what was formerly known as the Universal Runtime (URT) and is the underlying class library for pretty much any .NET (managed/common language runtime) code. The System.Xml classes reside in System.Xml.Dll and have managed code implementations of XML manipulation APIs, XSD, etc. I'm only pointing this out because many people are unaware that there is a managed-code XML parser kit similar to MSXML, and much internal MS development has switched to using these classes. This is not to say that MSXML is dead; it is just to say that the System.Xml classes will be an increasingly large part of the foodchain as time goes on.
--Joshua Allen, Microsoft, on the xml-dev mailing list
Total buy-in to XML makes perfect sense as the real lock-in potential is further up the food chain. I am seeing a worrying amount of glib "We use XML therefore ZZZ is an open, standards compliant system" from vendors.
--Sean McGrath on the xml-dev mailing list
why not use the BSD license instead of the GPL? Because the GPL gives me no defense against parasitism. If somebody wants to take my work and return nothing to the community, under the BSD license they would be free to do so, and would be able to treat me as a sort of unpaid employee. There's no quid-pro-quo there - I'd feel like a dupe.
But there are places where that lack of a quid-pro-quo is appropriate. For example, I might choose to use the BSD license in software that is part of a standard that I'd like to be industry-wide, and thus I'd want it to be part of someone else's proprietary software. This is the strategy used by the new Ogg Vorbis sound standard that could replace MP3 - it's patent-and-royalty-free, BSD licensed.
But most of the time, I want that quid-pro-quo, I don't want my software to be Embraced and Enhanced by Microsoft, and thus I use the GPL. If Microsoft or some other commercial user wants my software badly enough, they can call me up and negotiate a commercial license.
--Bruce Perens
Read the rest in SV.com Roundtables.
When you look at the thousands of lines of TeX macros or troff macros that people produce, it's a monument to the human intellect, but it's not really the right way to solve the problem.
--James Clark
Read the rest in Jul01: A Triumph of Simplicity: James Clark on Markup Languages and XML
XML is not a miracle. It was the selling of SGML to those who didn't like SGML because those who sold it to them told them not to, thus neatly enabling a private consortium to take control of the intellectual property of the International Standards Organization and privatize the ownership of it.
-- Claude L (Len) Bullard on the xml-dev mailing list.
--Peter Flynn on the xml-dev mailing list.
--James Clark
Read the rest in Jul01: A Triumph of Simplicity: James Clark on Markup Languages and XML
The greed that drove XML into the public consciousness died with the dot.bomb at the cost of a few trillion invested and now vapor fading like the lights in a rolling blackout.
--Claude L (Len) Bullard on the xml-dev@lists.xml.org mailing list.
I recommended to my partners that we go Windows-only in 1996. Why? Because by giving up Macs, I told them, we wouldn't spend time integrating all our different computers and could instead use computers to our advantage. Boy, was I wrong: It is as hard to maintain and integrate Windows computers as it is to integrate multiple kinds of computer systems.
--Stewart Alsop
Read the rest in Fortune.com
XML isn't magic. It is a computer science. Getting people to agree to use the agreements, that's magick.
--Claude L (Len) Bullard on the XML-Dev mailing list.
Forcing users to browse PDF files makes usability approximately 300% worse compared to HTML pages. You should only use PDF for documents that users are likely to print.
--Jakob Nielsen
Read the rest in PDF - Avoid for On-Screen Reading (Alertbox June 2001)
Big business is hijacking the Internet. We're creating new tollbooths on our systems.
--Jeff Chester, executive director of the
Center for Digital Democracy
Read the rest in Gated communities on the horizon - Tech News - CNET.com
My view is that no company can both sell the software that handles the global registry and sell the service too. That isn't a technical issue but something like saying a gun manufacturer can't field its own army. If the Hailstorm services depend on Microsoft owned servers, it's dead. They aren't allowed to be a troll beneath the bridge.
-- Claude L (Len) Bullard on the xml-dev mailing list
*If* RDDL takes off, its role is going to be pretty damn central. Since the design of XML empirically has a bias in favor of using multiple related resources to do one job for a class of data objects, whatever's used to tie them together is important.
--Tim Bray on the xml-dev mailing list
AOL Time Warner and Microsoft will probably take over some 70 to 80 percent of everything--Web access, Web usage, whatever. An element people are touching on is that it's not the content that's important--it's the functionality.
--Ken Lim, Cybermedia Group
Read the rest in Gated communities on the horizon - Tech News - CNET.com
RDDL is a pack o' XLinks. It's a good idea and well done but not a core piece. It is an application language that one may adopt to align pieces just as one might learn Topic Maps. But learn XLinks first and then RDDL/Topic Maps. That is what I mean by moving parts. Write a RDDL if you need one. Write a Topic Map if you need one.
--Claude L Bullard on the xml-dev mailing list
The real founders of XML are, more than anything, the founders of SGML, since XML is mostly SGML with the configurable bits lopped off.
--Rick Jelliffe on the xml-dev mailing list
I am also happy to say publicly that Microsoft have some excellent XML tools.
Which makes it all the more disappointing that their general release browser and operating system products still contain a far-from-excellent processor for a language that is a distant cousin of XSLT, as a result of which I see a high volume of requests for help from users who have got themselves into a serious and expensive mess as a result.
--Michael Kay on the xsl-list mailing list
XHTML 1.1 is good for people who need to write their own variants of XHTML and don't want to just edit the DTD but instead want to use XHTML Modularization.
So it's moderately useful for: theoretical browser makers (but who?), theoretical authoring tool makers, people who develop versions of HTML for internal use in a company (presumably for good reasons), and folks who find the arbitrary decisions of XHTML 1.0/HTML 4.01 to be pointless and want to do their own spot fixes.
So basically it's not all that useful.
--Kynn Bartlett on the XHTML-L mailing list
By definition, low resolution is less clear than high resolution. Fonts at 96dpi are clearer than fonts at 72dpi. The problem with the Mac is and has been that font sizes are pixel based, which unfortunately means that higher resolution equals smaller text.
Fonts need to be scalable so that regardless of the resolution of the output device fonts appear the same size. Higher resolution should be just that -- higher clarity. The way text on a 600 dpi laser printer looks cleaner, but the exact same size, as text on a 300 dpi laser printer.
--David Spancer
Read the rest in MacInTouch Reader Reports: Display Resolution Issues.
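Sutherland's and Spancer's complaint reduces to simple arithmetic: a font specified in points should occupy more pixels on a higher-resolution display, not shrink. A minimal sketch of that formula in Python (an editor's illustration of the 72-points-per-inch conversion, not any platform's actual API):

```python
def point_size_to_pixels(points: float, dpi: float) -> float:
    """Convert a font size in points to device pixels.

    There are 72 points per inch, so a glyph's pixel size must grow
    with the display's dots-per-inch for its physical size to stay
    constant -- higher resolution should mean clarity, not shrinkage.
    """
    return points * dpi / 72.0

# 12pt text on a classic 72dpi Mac display: 12 pixels
print(point_size_to_pixels(12, 72))   # 12.0
# The same 12pt text needs 16 pixels at 96dpi, 50 pixels at 300dpi
print(point_size_to_pixels(12, 96))   # 16.0
print(point_size_to_pixels(12, 300))  # 50.0
```

A pixel-based system that ignores this conversion is exactly the Mac behavior Spancer describes: the same pixel count rendered on a denser display simply gets smaller.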
Here's Britain, the foundation of democracy and freedom, building its govermental infrastructure on proprietary binary-only technology from a known predatory monopolist. In a free market democracy our governmental infrastructures should be permanently open to competitive bid. You should never be locked into a single-source supplier. That's just a fundamental architectural mistake.
--Bob Young, CEO of Red Hat
Read the rest in LinuxUser - Issue 11 - microsoft.gov.ok?
With the advent of Quartz, Apple is finally in a position to offer the most significant visual upgrade since they began supporting MultiSync color monitors -- fully scalable on-screen EVERYTHING!
More PPI in a display is GOOD! Or it would be, if only I could tell the system two things: the pixel dimensions of the display and the physical dimensions of the display. Quartz should be perfectly capable of scaling everything on-screen to WYSIWYG sizes (or whatever multiple or fraction thereof the user chooses), and displaying at the best resolution the display is capable of.
100 PPI? Fooey. Give me more! 300PPI! (or 600, or 1200, or...). I want to be able to set my video card to display as many pixels as it and my monitor can handle... SO I SEE TEXT WITH FEWER JAGGIES! My aging eyes would really appreciate it.
--Dean F. Sutherland
Read the rest in MacInTouch Reader Reports: Display Resolution Issues
XML standardizes a syntax for labelled nested structures holding textual content. For a lot of us, that's all the standardization that is presently appropriate. I'd like to have a chance to work on that level before we start declaring that piles more standards are necessary to get real work done.
--Simon St.Laurent on the xml-dev mailing list
XML is a great leap forward from OO, precisely because XML is text. An XML instance--the most concrete realization for which XML itself provides--is still amorphous, even abstract, with regard to the physical instantiation which it will be given by a process. Put differently, the data structure exhibited by an XML instance is still capable of sufficiently variable realization in process as to bridge the lack of shared data definition between the autonomous nodes of the internetwork topology.
--W. E. Perry on the xml-dev mailing list
Is XML post-OO? No. It is pre-LISP.
--Claude L (Len) Bullard on the xml-dev mailing list
XML is just smart ASCII.
--Claude L Bullard on the xml-dev mailing list.
What - you're surprised we're slipping? This is software, not the atomic time clock... 8-)
--Shane Curcuru on the xalan-dev mailing list
There were lots of styles of mixed content that looked good on paper, but did face-plants when implemented in live systems. The particular style of mixed content that's allowed in XML turned out to be the only reliable one.
--Eric Bohlman on the XML-L mailing list
Avoiding mixed content might make life simpler, but so would eating nothing but sardines. Mixed content is very necessary in real-world applications, at any rate those dealing with textual data.
--Wendell Piez on the XML-L mailing list
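The "mixed content" these two quotes defend is markup in which character data and child elements interleave inside a single element. A minimal illustration using Python's standard-library ElementTree (the document here is the editor's own example, not one from the list):

```python
import xml.etree.ElementTree as ET

# An element with mixed content: text interleaved with child elements.
doc = ET.fromstring('<p>XML was <em>never</em> only about data.</p>')

# ElementTree exposes the interleaving via .text (before the first
# child) and .tail (after each child) -- awkward, but it round-trips.
assert doc.text == 'XML was '
em = doc.find('em')
assert em.text == 'never'
assert em.tail == ' only about data.'

# Reassemble the flat character data in document order.
flat = ''.join(doc.itertext())
print(flat)  # XML was never only about data.
```

Data-centric formats can usually banish this interleaving; textual documents, as Piez says, cannot.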
When it was suggested early in the XML rhubarb that DTDs would go away, (well-formed only), I laughed. It removes the biggest advantage of SGML: standard vocabularies for focused domains, the easy means to annotate a text with inline metainformation for interpretation. Now people are defending DTDs against the next new thing and so it goes, but the principle remains: once you get beyond a simple message, well-formedness isn't enough. You need the metadata to get around the outrageous and inefficient noise reduction techniques of open text searching.
--Claude L Bullard on the xml-dev mailing list
Search is the user's lifeline for mastering complex websites. The best designs offer a simple search box on the home page and play down advanced search and scoping
--Jakob Nielsen
Read the rest in Search: Visible and Simple (Alertbox May 2001)
Despite vendor assurances, it seems plain that the race to compete on implementation puts the viability of multivendor institutions like OASIS at risk. The increased rivalry cannot help but spill over into these environments. One example is the size of the newly formed XML Protocol working group at the W3C. With over forty participants, the size of the Protocol WG exceeds even that of last years' political football, the Schema WG. Everybody wants in, and the most likely victim will be the protocol specification itself.
--Edd Dumbill
Read the rest in XML.com: XML Hype Down But Not Out In New York [Apr. 11, 2001]
What is RDDL really? It's a catalog XML-Dev built so a namespace reference could be resolved a year after "reasonable minds" blessed non-resolution while "experienced minds" sighed and said, "that won't hold". They declare a minimal victory, confuse the hell out of the world, then come back a year later, wave their hands over it and say, "RDDL me this."
--Claude L Bullard, on the xml-dev mailing list.
XML 1.0 removed syntactic diversity, letting us use each other's tools and share information without worrying about byte-level issues, much as TCP/IP provides a foundation on which other network applications can build.
On the vocabulary and meaning levels, however, I'd suggest that it does the reverse. While some see standardization of those levels as the next big task of XML, I think there's a much more exciting opportunity for programmers and users to represent information in the forms they find most convenient to their particular circumstances.
That would mean an explosion of diversity (vocabularies) in a much smaller set of circumstances (as XML replaces thousands of other possible base formats).
--Simon St.Laurent on the xml-dev mailing list
The CSS support of even simple things when you use XML as XML is pretty appalling.
--Andrew Watt on the XHTML-L mailing list
there are no secrets or conspiracies. But there are a lot of mailing list archives, meeting minutes, etc. available to any W3C members that MAY contain bits of information critical to "correct" implementation that never got written down in the spec. I'm remembering four years of working on the DOM WG -- this stuff happens, and you only learn about it when someone tries a "clean room" implementation from the spec and nothing but the spec. Goodwill and hard work aren't enough ... some things also require time before you can KNOW that the spec contains all the information needed to implement in an interoperable way.
--Mike Champion on the xml-dev mailing list.
XML is SGML as practiced. I think it had something to do with the W3C running the show instead of ISO.
--Claude L Bullard on the xml-dev mailing list
XML has terrific potential, most of which is unlikely to be realized because of complexity, awkwardness, and the inherent unreliability of the institutions upon which XML will be applied. But this doesn't mean XML is not worth doing, just that the dividends from Ballmer's XML Revolution are likely to be modest. It will work best in tightly-defined and constrained applications where it is in the interest of all parties for the system to work.
--Robert X. Cringely
Read the rest in I, Cringely | The Pulpit
I wouldn't say that W3C XML Schema cannot support document centric applications, but rather that it's not a good fit since its features are biased toward data applications.
--Eric van der Vlist on the xml-dev mailing list
There are a few problems with frames -- but most of these have been overcome and frames aren't really all -that- bad these days. The worst I can say about properly done frames, from an accessibility point of view, is that they are very strongly a _graphical_ metaphor and that's difficult to translate well for non-graphical users. But that can apply to nearly anything on the web that's graphical in nature, such as using tables to lay out a page.
The problem isn't really frames, it's poorly done frames.
--Kynn Bartlett on the XHTML-L mailing list
the defects of W3C XML Schema will be perceived by most as defects of XML.
--Eric van der Vlist on the xml-dev mailing list.
--Henry S. Thompson on the xml-dev mailing list
People kicked DTDs in the head for two decades. They have never died. People lauded various open document standards for most of that period which were ostensibly technically superior and now hydrogen can't float them. Other languages like VRML, touted widely as dead, going nowhere, cling tenaciously to the niche and will not be extinguished, in fact, have spawned competitive children (X3D, XMT, RM3D) that will interoperate.
Give XML Schema a year. If the market adopts it, there is no cause to complain. If the market doesn't, there is no need.
--Claude L Bullard on the xml-dev mailing list
The real danger is in assuming, and having specifications assume, that validation will be part of normal processing. For example, if XPath 2.0 is expanded to include operations that require a validated instance, there will be interoperability issues: if I send an XML instance and an XSL stylesheet to different processors, they might produce different results. XPath is already very close to being too heavyweight.
A good example of the issues can already be seen in the differences in documents parsed with and without DTD's. If validation with XSchema or any other tool results in an infoset with many "optional" pieces, and specs are built on that infoset, there *will* be issues.
--Gavin Thomas Nicol on the xml-dev mailing list
At the risk of violating the dreaded W3C Omerta oath ... looking at the official "votes" on Schema, I don't see a lot of evidence that most W3C members took a terribly close look at it and weighed the costs and benefits; they figure that the world needs a Schema spec, so they ASSUME that what the Working Group came up with is a Good Thing.
--Michael Champion on the xml-dev mailing list
I think the semantic web is an excellent idea, but it's not something that most humans should be involved with or get excited about. It's like saying that I think that clean drinkable water distribution is an excellent idea - but I hardly know anything about how it's done, I just turn the tap on, same as anyone else.
--Ian Tindale on the XHTML-L mailing list
In a short lifecycle message format (system outlasts or is more pervasive than data) where you can either throw it away or archive it, one gets back to form, fit and function debates. SGML wasn't used for protocols, so maybe this is a new wrinkle, but I suggest it is more related to archival so in that sense, the same advantages: recoverability and reusability. Addressing INTO a binary requires some tricks not used a lot since HyTime unless one is mapping to an abstraction.
Ever since the SGML binary discussions (circa 93?), this idea comes up at least biannually. It is like aliens: if they are here, where are they? The binary requirements can be asserted but one soon discovers that versions exist, none have been adopted widely and begins to ask why. The answer is usually that all other tradeoffs and conditions accounted for, there isn't enough cost benefit to justify adding yet another format to the support soup. Do protocol requirements offer a more compelling case than short lifecycle documents (where WYSIWYG turned out to be a good idea over markup: final fixed format vs archival format)?
--Claude L Bullard on the xml-dev mailing list
Yet, the evidence that I believe several list members were seeking was more a review of the current real (and not perceived) bottle-necks in processing textual XML (documents or otherwise). And, as RickJ noted, these bottle-necks may be alleviated by using techniques *other* than binary encoding.
The general problem (at least to me) is much more interesting: identify areas that could be improved, and then seek mechanisms for solving them. Binary encoding is just *one* possible solution, and (like all optimisations) will involve a trade-off of other features. Casting a wider net (short-tagging, binary indexes, lazy DOMs) might actually produce something beneficial in a greater number of cases.
--Leigh Dodds on the xml-dev mailing list
Historically, "standards" have involved some sort of enforcement mechanism: this is a gallon, and customers have legal recourse if they're sold something that doesn't conform. Say, the gas station doesn't correct for the expansion due to summer heat ... many weight scales must be certified too.
Of course that's exactly what a lot of software vendors are very afraid of letting customers have, even in limited scopes such as fully conforming to W3C specs (perhaps in order to be able to use W3C trademarks).
--David Brownell on the xml-dev mailing list.
One of the dubious joys of repeatedly attending XML conferences is hearing the major vendors (particularly Sun and IBM in this case) drone on about how XML will make your car understand which pair of pants you want back from the cleaners. Please! Show me the pointy brackets.
--Edd Dumbill on the xml-dev mailing list
every binary RPC protocol I've ever seen has been converted, sooner or later, into a conduit for proprietary platforms. Fragmenting a previously-unified (XML=text) world by creating a binary variant seems a fine start, for any organizations wanting to head that direction. Large vendors can afford the duplicate investments, when they can foresee it opens the door to more vendor lock-in. The rest of the world may well prefer to do smarter things with their time/money than helping raise more barriers to market entry.
--David Brownell on the xml-dev mailing list
I just wasted a weekend getting my schema validator to dump the internal form of the 'compiled' schema-for-schemas, on the _assumption_ that reloading that would be faster than parsing/compiling the schema-document-for-schemas every time I needed it. Wrong. Takes more than twice as long to reload the binary image than to parse/compile the XML.
There are _lots_ of people out there working hard to make parsing/writing XML blindingly fast. With respect, you're unlikely to beat them.
--Henry S. Thompson on the xml-dev mailing list
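Thompson's finding -- that reloading a "compiled" binary image can actually lose to re-parsing the XML -- is easy to try for yourself. A rough sketch with Python's standard library, with pickle standing in for his binary dump format (the editor's assumption; actual numbers depend entirely on your data, parser, and runtime, which is exactly his point):

```python
import pickle
import time
import xml.etree.ElementTree as ET

# Build a moderately large synthetic document.
xml_text = '<root>' + ''.join(
    f'<item id="{i}"><name>n{i}</name></item>' for i in range(5000)
) + '</root>'

# Time plain text parsing.
t0 = time.perf_counter()
tree = ET.fromstring(xml_text)
parse_time = time.perf_counter() - t0

# Time reloading a pickled ("binary compiled") form of the same tree.
blob = pickle.dumps(tree, protocol=pickle.HIGHEST_PROTOCOL)
t0 = time.perf_counter()
tree2 = pickle.loads(blob)
load_time = time.perf_counter() - t0

print(f'parse: {parse_time:.4f}s  binary reload: {load_time:.4f}s')
assert len(tree) == len(tree2) == 5000  # same structure either way
```

No winner is asserted here on purpose: as Thompson found, heavily optimized text parsers make the "binary must be faster" intuition unsafe without measurement.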
WAP is a miserable multibillion-dollar failure... some have argued convincingly that they crippled the design based on the assumption that the devices would have to be stupid and the connections slow; leading them to, among other things, binary XML.
--Tim Bray on the xml-dev mailing list
I do a lot of work with WAP and experience with it has turned me off binary XML encodings fairly comprehensively. I don't think WAP demonstrates the advantage of a binary encoding. I think it demonstrates quite the opposite.
--Sean McGrath on the xml-dev mailing list
Would that speed up the whole application? You'd need to know what proportion of its time it spends parsing/generating XML. In some apps, this proportion is going to be very small.
--Tim Bray on the xml-dev mailing list
I see RDF and tend to think of data mining and discovery. I see Topic maps and tend to think of drill down and visualization. RDF is closer to the system tables; topic maps closer to what is in the treeviews.
--Claude L Bullard on the xml-dev mailing list
Even though XLink and RDF are targeted at different purposes, it's still a fair observation that XLink has a lot (not all) of the power of RDF.
--Eve L. Maler on the xml-dev mailing list
In the closing days of getting XML 1.0 out the door, a lot of *reasonable* requests for enhancements were, in good software engineering style, kiboshed as being "for 1.1". Once 1.0 got out the door, everyone developed a strong case of (healthy IMHO) paranoia about screwing with the thing, and personally I'd be astounded to see anyone take on XML 1.1 in my lifetime; the cost is very high and the need doesn't seem that great. So it's legit to suspect that to push things into 1.1 is to kill them.
--Tim Bray on the xml-dev mailing list
XSLT curls my hair as well, though there are a growing number of people who seem to love it.
If you stick to relative simple problems initially - start with a document model that's tolerably well-designed, and then map it to some formatting that doesn't involve reorganizing your entire book - it's not so bad. A lot of the obfuscated XSLT out there is designed to process complex data structures that were sort of weakly thrown into XML and now need substantial cleanup.
--Simon St.Laurent on the "Computer Book Publishing" mailing list.
the trick with building a web browser these days is not so much to build the basic browser, but to deal with all the weird ways the major web sites screw up the code on their sites. So it's really hard to build a browser that will actually properly display all the popular web sites.
--Bart Decrem
Read the rest in linuxpower.org Eazel: After the earthquake
The Magic Problem Solver du jour is XML, or Extensible Markup Language, a system for describing arbitrary data. Among people who know nothing about software engineering, XML is the most popular technology since Java. This is a shame since, although it really is wonderful, it won't.
The truth is much more mundane: XML is not a format, it is a way of making formats, a set of rules for making sets of rules. With XML, you can create ways to describe Web-accessible resources using RDF (Resource Description Framework), syndicated content using ICE (Information Content Exchange), or even customer leads for the auto industry using ADF (Auto-lead Data Format). (Readers may be led to believe that XML is also a TLA that generates additional TLAs.)
--Clay Shirky
Read the rest in XML: No Magic Problem Solver
DOM doesn't really match the Infoset. It was either do it DOM's way from the beginning or give up on DOM compatibility. We chose the latter.
--John Cowan on the xml-dev mailing list
The bottom line: we (users, W3C, marketers) should treat XML Schemas 1.0 as a well-made, interregnal, comprehensive schema language not the mandatory, ultimate, be-all-and-end-all, universal schema language of fantasy.
--Rick Jelliffe on the xml-dev mailing list
I think validation will eventually represent a minority of XML Schema implementations. The main function of XML Schemas in my mind is that they provide metadata and metadata allows you to drive all sorts of applications. A simple example is an editor that reads an XML Schema and formats the UI accordingly. A more complex example is an application that reads an XML Schema and generates classes or database schema from it.
--Ronald Bourret on the xml-dev mailing list
you can debate whether it is right or wrong till your heart's content, but just as there was an era before the notion of intellectual property, we are now entering into the "post intellectual property" era. And just as there was music, literature, art, and innovation before these laws were invented, there will be after the laws are gone.
--Howard Ires on the WWWAC mailing list
If users are accustomed to "editing documents" a la Word or other word processor, they'll find doing so with XML Spy an exquisitely excruciating experience. :) This isn't a knock on XML Spy, but it definitely is oriented toward data-centric (vs. document-centric) XML.
--John E. Simpson on the XML-L mailing list
XML is just a tool. Not at all interesting by itself, but a handy thing to know when building a solution.
--James Robertson on the xml-dev mailing list.
Most websites, when faced with the choice of spending 1 million dollars on advertising to increase site visits and spending $50,000 on usability to convert more visitors to customers, would have spent it on advertising. Metrics were centered around traffic rather than profits. This is a large part of why they failed.
--Scott Shirley on the wwwac mailing list
XML took a complex set of problems, broke the components into smaller pieces, picked off the easy ones to convince everyone that it was the way to go. Now that that has been completed, we're left with trying to resolve the original but deferred complexity as well as the new complexity between the smaller components.
--Marcus Carr on the xml-dev mailing list
There is a place for XML, but it is not in the programming language proper, as XSLT has shown oooh so clearly.
--Clark C. Evans on the xml-dev mailing list.
I have said repeatedly over the years, that I will entertain the encoding of Klingon when the tribble-kissing wimps at the Klingon High Command beam an armed delegation into a UTC meeting and demand the encoding of their script. Until then, I see no reason to consider encoding this script.
--Rick McGowan on the Unicode mailing list
In SQL, the query language is not expressed in tables and rows. In XQuery, the query language is not expressed in XML. Why is this a problem?
--Jonathan Robie on the xml-dev mailing list
XML standards are the latest in a series of great hopes in IT. XML standards will provide users with vendor independence. XML standards will strip all of the latency out of intercompany operations at a low cost. XML standards will create a single global electronic market enabling all parties irrespective of size to engage in Internet-based electronic business. XML standards will provide for plug-and-play software.
Does any of this sound familiar to you? It should because we've heard promises just like these for standards in Unix, objects, and various network protocols. These promises are the marketing, not the reality, of XML standards. Early experience with RosettaNet and Microsoft's SOAP indicates that XML standards provide some leverage for some problems in small-scale systems. The backlash is inevitable, and can be fatal even to well-considered standards efforts.
--John R. Rymer
Read the rest in Why 90 percent of XML standards will fail.
XQuery = XSLT - templateRules - nonAbbreviatedXPathAxes
--Evan Lenz on the xsl-list mailing list
the value of a CS degree is a matter of constant debate. Most of the 'founding' XML community seems to be a bunch of humanities majors, as are a lot of the HTML folks I know.
--Simon St.Laurent on the XHTML-L mailing list.
SGML in those days was OnlyForPrintingBigBooks. The graphics folks were busily trying to grab for the top of the abstraction tree (who owns the parse), and getting any three vendors to agree on a network was almost impossible. So much IT had to be devoted to the "glue" it cost big bucks and it seldom could be reproduced. That is what CALS was about or supposed to be. In the beginning (Computer Aided Logistics) CALS was close to being an "even entry point" in that there were just three or four standards and some flexibility for implementation. Basically, it was a file forward lobster trap system. By the time it became Commerce At Light Speed four billion dollars later, it was such a hodge-podge of options, no one wanted to fool with it. Along came the web with No Options (HTML - love it or leave it) and things moved again while the three decades of markup development repeated.
It is like rock: from three-chord blues to jazz every generation, then a collapse of unsustainable complexity back to unendurable simplicity. It is a cycling learning curve driven by the ratio of experienced users to newbies.
--Claude L Bullard on the xml-dev mailing list
It's impossible to get people to use standard, valid HTML these days -- getting them to use appropriate metadata is that problem squared.
--Edd Dumbill
Read the rest in P2P Goes in Search of 'Doogle'
XML Doesn't Care. XML Doesn't Know. Care and knowledge are in the application processor. It is a local network node with layers of interpretation above and below it.
--Claude L Bullard on the XML DEV mailing list.
Retain <xsl:script> - and soon there will be a lot of stylesheets written 50% in XSLT and 50% in Java, which are STANDARD and probably even more stylesheets written 50% in XSLT and 50% in VB, C#, etc, which are NON-STANDARD. Would it really improve interoperability? And why Java is in more privileged position than any other language?
--Alexey Gokhberg on the xsl-list mailing list
So if you want a one or two week project, implement XML. If you want a six month to one year project, implement SGML.
--Rick Jelliffe on the xml-dev mailing list
I'm kind of fed up with reading about "data centric" vs "document centric" XML. I thought part of the promise of XML was that I would eventually be able to handle documents as data and vice versa- the distinction would be moot.
--Linda Grimaldi on the xml-dev mailing list.
I have to ask myself why I don't just switch my server to FreeBSD.
--Moshe Bar
Read the rest in Byte > Column > Linux 2.4 vs FreeBSD 4.1.1 > Qualitative Results > January 30, 2001
Confusing a demo design with a prototype for a working project is the kind of mistake amateurs make all the time--show a hot demo to the sales force and they want to know when it is going to ship.
Usually, the people actually engaged in the project well understand the difference. Apple seems to have lost all ability to tell real from imaginary. We saw this phenomenon with the round mouse, a "cool" design that was completely impractical. It took two years of the trade press calling them idiots before they finally pulled the mouse from the market. That protest was nothing compared to what the Dock has generated, and still they are hanging on.
--Bruce Tognazzini
Read the rest in Top 10 Reasons the Apple Dock Sucks
Being Turing complete, one could write a regexp string matcher in XSLT, if you had a spare month or two to write it, and your users had a similar amount of time to run it...
--David Carlisle on the xsl-list mailing list
There is nothing wrong with allowing people to optionally choose to buy copy-protection products that they like.
What is wrong is when people who would like products that simply record bits, or audio, or video, without any copy protection, can't find any, because they have been driven off the market. By restrictive laws like the Audio Home Recording Act, which killed the DAT market. By "anti-circumvention" laws like the Digital Millennium Copyright Act, which EFF is now litigating. By Federal agency actions, like the FCC deciding a month ago that it will be illegal to offer citizens the capability to record HDTV programs, even if the citizens have the legal right to. By private agreements among major companies, such as SDMI and CPRM (that later end up being "submitted" as fait accompli to accredited standards committees, requiring an effort by the affected public to derail them). By private agreements behind the laws and standards, such as the unwritten agreement that DAT and MiniDisc recorders will treat analog inputs as if they contained copyrighted materials which the user has no rights in. (My recording of my brother's wedding is uncopyable, because my MiniDisc decks act as if I and my brother don't own the copyright on it.)
--John Gilmore
Read the rest in What's Wrong With Content Protection
In this week's InfoWorld, test-center editors and analysts chose the 10 most significant technologies for business in the year 2000. They made some respectable choices. Naming XML as the year's most significant business technology wasn't one of them. I'm not arguing that it hasn't had a big impact on business, although I would strongly suggest that it shouldn't have been rated No. 1.
But come on folks, since when is XML a technology? XML really isn't much more sophisticated than those label makers that are used to dial up letters and punch a name into a plastic strip. Yes, I know that when it comes to XML it's not the label that matters but the standard. And I admit that as a standard, XML has been one heck of a boon to business. But when I think of XML I think of the Dewey Decimal System. And to me, Dewey just ain't a technology.
--Nicholas Petreley
Read the rest in Java's future lies with Linux
One of the great things about 10646 and Unicode being in sync is that there are some people who do not trust industry -- so they can just embrace ISO. Others do not trust anyone but the actual people doing the implementing -- they can embrace Unicode.
--Michael Kaplan on the Unicode mailing list
the utility of grammar-based schemas is as well-established as the utility of the horse-drawn stump-jump plough: yes we can do excellent things with them that we would not do without them, but if we are not Amish why use a horse when there are shining tractors with air-conditioned cabs and diverting CDs of Dwight Yoakim yodelling and the Georgia Peach singing?
If we know XML documents need to be graphs, why are we working as if they are trees? Why do we have schema languages that enforce the treeness of the syntax rather than provide the layer to free us from it?
--Rick Jelliffe on the xml-dev mailing list
This is the worst kind of political exploitation. It takes schools off the hook and turns the complex process of school administration over to adolescents. Kids will ultimately have to live in fear that the desk mate they jostled with will turn them in, or that bragging about exploits on Doom will get them turned into W.A.V.E. as "unbalanced."
If a teen or a parent becomes aware that a classmate has a gun and plans to use it, there are plenty of cops and law enforcement officials they can call. There is no statistical evidence to support the notion that schools are so dangerous that children need to be manipulated into turning one another in. Nor is there much doubt about who will be targeted - geeks, nerds, Goths, oddballs, along with anyone else who is discontented, alienated and individualistic.
That kids are being asked to do this is revolting enough. That they are being asked to do it by a profit-making private corporation suggests a culture much sicker and more dangerous than most school kids.
--Jon Katz
Read the rest in Slashdot | Voices From The Hellmouth Revisited: Part Ten
I do find it interesting that every techie in Redmond, home of the übergeeks, had to be hunched over those servers, looking at what was happening on that network. So even though they are saying someone took advantage of them when they were down and out, in truth it should have been harder to get in when they were all on the alert. Sort of like showing up in the afternoon to rob a bank when the feds are still there investigating that morning's robbery.
--Tepes
Read the rest in Microsoft Crashes: The Fallout
Talking to WAP Forum is like talking to old Soviet Politburo committee about problems of Communism. You can throw 30K to join the ranks of engineering-bureaucratic-politicians from carriers, handset manufacturers, and mCommerce wannabes, but you will be better entertained in a mental institution.
--Don Park on the xml-dev mailing list
As far as "optimization" of data formats goes in general, remember that the law of evolution known as Fisher's Fundamental Theorem of Natural Selection applies to human inventions as well as biological organisms. It says that the better adapted an organism is to its current environment, the less change in its environment it can survive. In the realm of data formats, this means that the effort spent optimizing a data format is likely to be wasted as soon as the data to be conveyed changes, because the optimization took advantage of what were effectively limitation on the data to be conveyed.
--Eric Bohlman on the xml-dev mailing list
My children at primary school are better trained on the Internet than the local police are
--Tim Snape
Read the rest in ISPs 'RIP' Into British Police
Consumers have a message for companies trying to figure out why the wireless Web market has failed to take off in this country: It's the screen, stupid.
--John Borland
Read the rest in CNET.com - News - Communications - A high-wireless act
Congress has effectively allowed Hollywood to write a statute that turns copyright holders' wishes into federal law with severe criminal penalties for anyone who does not comply with those wishes, Important individual rights like fair use, first sale, and the public domain are eliminated by the statute's sloppy handling of civil liberties.
--Robin Gross
Read the rest in Copyright: Your Right or Theirs?
--Richard Stallman
Read the rest in GNUPedia Project Announcement - GNU Project - Free Software Foundation (FSF).
--Claude L Bullard on the xml-dev mailing list
Cutting-edge technology does not give a company the right to cut jobs without proper notice to employees.
--Connecticut Attorney General Richard Blumenthal
Read the rest in CNET.com - News - E-Business - Connecticut sues Net company for labor violation
the future clearly lies in abandoning both HTML and XHTML, and I think that caution must be made to make sure that XHTML efforts aren't trying to hold back fuller support for "pure" XML based solutions as some sort of crutch or security blanket.
--Kynn Bartlett on the XHTML-L mailing list
--Norman Walsh on the xml-dev mailing list
Should a namespace URI reference an actual resource (such as a schema definition) or not?
In the end, it seems to me that there is only one reasonable answer: It can be if it wants to be.
--Seairth Jacobs on the xml-dev mailing list
XHTML is at best a transitory language which will fade out of use within a few years, replaced by something much better. It may linger on for a while as a necessary item to consider for backward's compatibility's sake -- in other words, it will take the place currently held by HTML 3.2 or so.
--Kynn Bartlett on the XHTML-L mailing list
Clearly, for the future of both mobile Internet and mobile voice communication, telephones have no benefits and many downsides. The telephone has served us well for 100 years. It is time for it to go.
--Jakob Nielsen
Read the rest in Mobile Phones: Europe's Next Minitel? (Alertbox Jan. 2001)
I personally find the separation of presentation from (content? structure? whatever) to be very artificial and forced; it's a distinction that exists primarily because people dogmatically _want_ it to exist and not because it is natural.
--Kynn Bartlett on the XHTML-L mailing list
<rant subject="namespace kvetching" frequency="every 6 months or so"> All attempts to assign meaning to namespace names (which are URI references) are ex post facto and irrelevant to the aims of the namespace recommendation, which is to make names unique for practical purposes in the Internet context. This is a useful thing to do, and the namespace recommendation does it.
Once there is some general agreement as to what kinds of semantics one might expect to attach to namespaces, and what mechanisms prove to be the best for expressing those semantics, then it will be possible to have a useful debate about the meaning of namespace identifiers. In the advance of such agreement, the debate has been, and continues to be, an outpouring of hot air which could be put to better use this winter in helping alleviate energy shortages. </rant>
--Tim Bray on the xml-dev mailing list
I would humbly suggest that it might be reasonable at this point to put "namespaces mean X because the namespaces spec says so" into the same category as "one must keep servants because all respectable people do so."
--Simon St.Laurent on the xml-dev@lists.xml.org mailing list
Yes, Virginia, there is a Semantic Web. A little bit of it exists in each of us ...
--David Megginson on the xml-dev mailing list
The aim of the Portable Document Format is noble. Every page should look exactly the same on any platform, regardless of user settings. If a user were to view a certain web page on my computer and then switch the resolution, the change could be quite significant. Likewise, if a user viewed a web page or some other sort of document on Windows and then switched to Linux, things might also look very different. This is fine for a lot of things, but when pages need to be formatted in a precise way, as in books or user manuals, it becomes a problem. It is, however, a problem that can easily be solved by creating PDF documents for things like these.
PDF documents are fairly easy to create today. All someone has to do is click a button or a menu option in his favorite word processing application, or he can simply print a page to a PDF file. However, there are certain situations where this cannot be done, such as when a PDF document needs to be generated dynamically. This is where programming comes in. There are a number of libraries designed to work with a number of languages to generate PDF documents. This article will examine the ReportLab Toolkit for Python.
Obtaining the ReportLab Toolkit
The ReportLab Toolkit may be obtained from the ReportLab website:
Extract the archive and then run the install script:
$ python setup.py install
If you plan on working with images, you’ll also need the Python Imaging Library (PIL):
If you’re on Windows, just download and run the binary installer. Otherwise, download the source, extract it and run the installation script:
$ python setup.py install
Again, you’ll need the proper permissions.
Putting Virtual Ink to Virtual Paper
Now that the ReportLab Toolkit has been installed, we can begin using it immediately. Fire up Python’s interactive interpreter, and let’s get started. The first step is to import canvas from the pdfgen module:
>>> from reportlab.pdfgen.canvas import Canvas
Since the ReportLab Toolkit is set up to use the A4 paper size by default, North American developers will have to perform an extra step to gain access to the standard 8.5″ by 11″ letter size:
>>> from reportlab.lib.pagesizes import letter
The pagesizes module also contains various other paper sizes in case you ever need to use them.
Next, we have to create the PDF document in the form of a Canvas object. It requires a filename argument, which may be an absolute or relative path (based in the current working directory):
>>> pdf = Canvas("test.pdf")
This will create an A4 page. However, as I mentioned before, not everyone will want an A4 page. To create a page based off of the letter page size, we must set pagesize:
>>> pdf = Canvas("test.pdf", pagesize = letter)
Throughout this article, I’ll be using the letter page size simply because this is what I’m familiar with. In your own scripts, you’re free to use whatever you’re accustomed to.
We can now draw something in our PDF document. A string of text is probably the easiest place to start. We’ll go ahead and draw one using the Courier font in red:
>>> pdf.setFont("Courier", 12)
>>> pdf.setFillColorRGB(1, 0, 0)
>>> pdf.drawString(300, 300, "CLASSIFIED")
An important thing to note here is that when specifying coordinates, the origin is in the lower left hand corner of the page, rather than the top left. It’s also possible to specify measurements in other units. You can use centimeters, millimeters, inches and picas. The default unit of measurement is a point, equal to one seventy-second of an inch. The extra measurements are available from reportlab.lib.units:
>>> from reportlab.lib.units import cm, mm, inch, pica
To use the measurements, simply multiply them by however many units you want. Let's go ahead and draw a string of text one inch above the bottom of the page and two inches from its left edge:
>>> pdf.drawString(2 * inch, inch, "For Your Eyes Only")
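Since these unit constants are just point multipliers (one point is 1/72 of an inch), the conversions can be sanity-checked with plain Python, independently of ReportLab. The values below mirror what `reportlab.lib.units` exports:

```python
# ReportLab measures everything in points (1/72 inch). Its unit constants
# are plain multipliers, reproduced here to check a layout's arithmetic.
inch = 72.0
cm = inch / 2.54        # about 28.35 points
mm = cm / 10.0
pica = 12.0             # 1 pica = 12 points

# A US letter page is 8.5" x 11", i.e. 612 x 792 points.
page_width = 8.5 * inch
page_height = 11 * inch
print(page_width, page_height)  # 612.0 792.0
```

This also explains why `letter[0]` (used later for centering) equals 612.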
Now that we have some text on the page, let’s close the page:
>>> pdf.showPage()
The showPage method closes the current page. Any further drawing will occur on the next page, though if all drawing has ended, another page will not be added. We now have to save the PDF document:
>>> pdf.save()
The ReportLab Toolkit saves our page, which you can now view. The first thing you'll notice is that it's rather ugly and blandly formatted. The text would look a lot better if it were centered, which is perfectly possible with the drawCentredString method (notice the British English spelling) and a bit of math. The drawCentredString method draws the text with its center on the given x-coordinate, which makes centering text easy since we only have to calculate the center of the page. We'll also change the font size (which, by the way, has been reset along with the font face and color since we started on a new page):
>>> pdf.setFont("Courier", 60)
>>> pdf.setFillColorRGB(1, 0, 0)
>>> pdf.drawCentredString(letter[0] / 2, inch * 6, "CLASSIFIED")
>>> pdf.setFont("Courier", 30)
>>> pdf.drawCentredString(letter[0] / 2, inch * 5, "For Your Eyes Only")
>>> pdf.showPage()
>>> pdf.save()
There, the result now looks slightly more pleasing.
Text Formatting Techniques

[Most of this section was lost during extraction. The surviving fragments show that it demonstrated Platypus, ReportLab's higher-level layout engine: a stylesheet obtained with reportlab.lib.styles.getSampleStyleSheet, Paragraph flowables containing the "All the king's horses and all the king's men / Couldn't..." nursery rhyme collected into a story list, and the document produced with pdf.build(story).]
Using Graphics
Drawing graphics in a PDF isn’t very difficult with the ReportLab Toolkit. Let’s go back to the pdfgen module and draw a few shapes on the page. Go ahead and create a new canvas to work with:
>>> pdf = Canvas("graphics.pdf", pagesize = letter)
We’ll now draw a line that spans across the top of the page, leaving an inch on its left, right and top:
>>> pdf.line(inch, inch * 10, inch * 7.5, inch * 10)
We have the freedom to choose whatever colors we want for graphics. Here, we set the stroke color (the color outlining an image) to black and the fill color to lime green:
>>> pdf.setStrokeColorRGB(0, 0, 0)
>>> pdf.setFillColorRGB(0, 1, 0)
Using these colors, we can draw shapes. Notice how we have the option of specifying whether or not we want to stroke or fill the image being drawn. If, however, we choose to omit these variables, the shape will be stroked but not filled:
>>> pdf.rect(inch, inch, inch * 2, inch * 2, stroke = True, fill = True)
>>> pdf.circle(inch * 5, inch * 5, inch)
>>> pdf.circle(inch * 5, inch * 5, inch * .5, False, True)
Save the page and then take a look at the result:
>>> pdf.showPage()
>>> pdf.save()
If the shape that you want isn’t available, you can also draw it within the ReportLab Toolkit yourself, using paths. With paths, it’s possible to draw lines from point to point to construct a shape. After that, you can stroke and fill the shape. Let’s set colors, first. Our shape will be stroked with blue and filled with red:
>>> pdf.setStrokeColorRGB(0, 0, 1)
>>> pdf.setFillColorRGB(1, 0, 0)
Next, we have to create a path object to be used:
>>> path = pdf.beginPath()
Before we begin drawing, let’s move it to a starting point:
>>> path.moveTo(inch * 4, inch * 4)
Now let’s get to the drawing part. We’ll create three lines that form a triangle by using the lineTo method:
>>> path.lineTo(inch * 3, inch * 4)
>>> path.lineTo(inch * 3.5, inch * 5)
>>> path.lineTo(inch * 4, inch * 4)
When we’re done with a path, we simply have to draw it. We can specify whether we want a stroke or a fill or both:
>>> pdf.drawPath(path, True, True)
Save the page and take a look at the triangle:
>>> pdf.showPage()
>>> pdf.save()
We’re not limited to drawing everything by hand. It’s also possible to draw existing images into a PDF document. For example, let’s draw the DevShed symbol on a page:
Images can be drawn using the drawImage method of a canvas object, which also returns the image dimensions:
>>> pdf.drawImage("devshed.jpg", inch, inch * 10)
(34, 24)
>>> pdf.showPage()
>>> pdf.save()
If you’re using Platypus, you’ll want to use the Image flowable object instead:
>>> from reportlab.platypus import SimpleDocTemplate, Image
>>> pdf = SimpleDocTemplate("logoDoc.pdf")
>>> pdf.build([Image("devshed.gif")])
Conclusion
The Portable Document Format is very popular because of its ability to render pages that look exactly the same in many environments. There are many libraries out there that deal with generating PDF documents dynamically, and the ReportLab Toolkit is one of those libraries. In this article, we examined the low-level pdfgen module, which allows text and images to be positioned precisely on a page. While this approach is fine for many purposes, it becomes impractical when dealing with larger amounts of content. For those situations, Platypus is the tool of choice. It takes care of things such as word wrapping and page breaks for us, allowing us to spend more time on other things.
You should now be familiar with the basics of the ReportLab Toolkit. From here, try to create scripts that generate dynamic PDF documents from data sources, such as text files (which we took a look at already, though you can certainly attempt to improve our script) and databases.
I'm new to Python. I wonder how to get the string from an html as following:
<span style="color: blue; font-size: 36px; font-weight: 600;"> string </span>
from lxml import html
import requests
page = requests.get("url")
tree = html.fromstring(page.content)
Be careful with tree-searching HTML files, because every so often developers will move things around, which ends up breaking old projects. I feel it's safer to go with string manipulation, because if you plan it well, you won't have to reprogram it even if the developers decide to wrap your target in one more container.
It's crazy how much you can accomplish with just the split function.
text = your_string.split(">")[1].split("<")[0]
Here are a couple tools I made and like to use when getting a string when I know what will be before it and after it.
def get_every_str_between(s, before, after):
    # gets every substring between two marker strings in python
    # by: Cody Kochmann
    # (body reconstructed by the editor -- the original was garbled in extraction)
    return [part.split(after)[0] for part in s.split(before)[1:] if after in part]

target = '<span style="color: blue; font-size: 36px; font-weight: 600;"> string </span>'
print get_every_str_between(target, ">", "</")
[' string ']
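For completeness, the same extraction can be done without lxml or string splitting, using only the standard library's html.parser. This is an editor's sketch (Python 3), not code from the original thread; it collects the character data inside every `<span>` element:

```python
# Dependency-free extraction of <span> text using html.parser.
from html.parser import HTMLParser

class SpanText(HTMLParser):
    """Accumulates the character data found inside <span> tags."""
    def __init__(self):
        super().__init__()
        self.in_span = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "span":
            self.in_span = True

    def handle_endtag(self, tag):
        if tag == "span":
            self.in_span = False

    def handle_data(self, data):
        if self.in_span:
            self.chunks.append(data)

parser = SpanText()
parser.feed('<span style="color: blue; font-size: 36px;"> string </span>')
print("".join(parser.chunks).strip())  # string
```

Unlike the split trick, this survives attribute changes and extra markup inside the tag.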
Let's say you have four array objects.
test[1] = new test();
test[2] = new test();
test[3] = new test();
test[4] = new test();
then you have your class:
public class test
{
    public int i; // initialize some fields

    public int getnumber()
    {
        // what is the array number of this object? 1, 2, 3, or 4???
    }
}
so basically, if a method is called as test[1].getnumber(), how can the method internally determine which array index its object is at? (without passing an integer from the page)
it can be either 1, 2, 3, or 4.

All, is it possible to convert the Report Document object as a byte array and store it in SQL, later retrieve it and assign it to a report?

I'm getting this error, what am I missing?

error: An object reference is required for the nonstatic field, method, or property 'object.GetType()'

sb.Append(
This is the fourth in a new advanced series of posts written by Imanol P. The signature of a path can be a powerful tool to tackle machine learning problems. The previous article showed an application to a toy problem, but as we will see in this article, rough path theory and signatures can also be used in more practical applications.
More specifically, we will build a model that predicts, and quite accurately in fact, which country a company belongs to using the evolution of its stock price and traded volume.
Obviously, the way the price of a stock changes over time varies from company to company. If a company is profitable and investors think it will still be profitable in the future, the price of its stock will probably rise. If, on the other hand, a company is near bankruptcy it is quite likely that the price will fall.
However, intuitively it makes sense (and this is in fact what one observes in real life) to expect that outside factors that affect the country as a whole will also have an impact on the price of a stock. Therefore, it is reasonable to think that prices of stocks that trade on the same country will have some intrinsic similarities, that go beyond the performance of each particular company.
The objective of this article will be to capture these similarities using signatures. We will make use of two main ingredients for this task: the daily price of the stock over a year, and the daily volume over the same period of time. We will consider stocks of three different countries: United States, United Kingdom and Germany. As we will see, the model can correctly predict the country a stock belongs to with an accuracy of 97%.
The companies that were used can be found in this file. We shall first load the data from each company:
import datetime

import pandas as pd
import pandas.io.data as web

# We will consider data from 2016.
start = datetime.datetime(2016, 1, 1)
end = datetime.datetime(2017, 1, 1)

# Load data from each company.
data = []
for country in tickers:
    print("Loading companies from " + country + "...")
    for company in tickers[country]:
        companyData = getData(company, start, end)
        # If the company doesn't have any data, ignore it.
        if len(companyData) == 0:
            continue
        data.append(Stock(companyData, country))
print("Done.")
where the class Stock is given by the following:
class Stock:
    '''Class that contains information about a stock, that will later be used.'''
    def __init__(self, data, country):
        # Store the stream of data.
        self.data = np.array(data, dtype='float32')
        # Store the country the stock belongs to.
        self.country = country
        # Since the output to train the model must be a vector, each
        # country is represented by a point, calculated with the
        # function country_to_point.
        self.point = country_to_point(country)

def country_to_point(country):
    '''Converts a country into a point.'''
    dictionary = {"US": (1, 0), "UK": (-1, 0), "DE": (0, 1)}
    return dictionary[country]
and the function getData is defined as follows:
def getData(ticker, start, end):
    '''Gets data from the specified ticker, for a set time period.'''
    stock = web.DataReader(ticker, "google", start, end)
    values = stock[["Close", "Volume"]].reset_index().values
    for i in range(len(values)):
        values[i][0] = string2datenum(str(values[i][0]), "%Y-%m-%d %H:%M:%S")
    return values
Then, we divide the data into two subsets: the training set, which has 70% of the data, and the testing set, with the remaining data. The training set will be used, as the name suggests, to train the model. The testing set, on the other hand, will be useful to see how accurate our model is with out-of-sample data.
# We randomly divide the dataset into two subsets: the training_set,
# which has 70% of the data, and the testing_set, with the remaining 30%.
shuffle(data)
training_set = data[0:int(0.7 * len(data))]
testing_set = [company for company in data if company not in training_set]
We may now construct the inputs and outputs that will be used to train and test the model:
# The inputs and outputs to train the model are constructed.
inputs = [company.data for company in training_set]
outputs = [company.point for company in training_set]

# Inputs and outputs to test the model are built.
inputsTEST = [company.data for company in testing_set]
outputsTEST = [company.point for company in testing_set]
Finally, we run the model for different order of signatures and test the accuracy:
# We apply the model for signature orders 1 to 4.
for signature_order in range(1, 5):
    # The model is trained.
    model = sigLearn.sigLearn(order=signature_order)
    model.train(inputs, outputs)
    # We calculate the predictions.
    predictions = model.predict(inputsTEST)
    # We check the accuracy of our predictions, and print it.
    print(accuracy(predictions, outputsTEST))
The whole code can be found in this repository.
If we test the model for different signature orders, we see that the best results are obtained for the signature of order 4 (see Figure 1). In this case, the model correctly predicted which country a company belongs to with an accuracy of 97%.
Fig 1 - Accuracy of the model for different signature orders.
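The article's pipeline relies on its sigLearn library, which is not shown here, but the core object — the path signature — can be sketched with plain NumPy. The function below is an illustrative implementation of the level-1 and level-2 terms for a piecewise-linear path via Chen's identity; it is the editor's sketch, not the article's code:

```python
import numpy as np

def signature_level_1_2(path):
    """Level-1 and level-2 signature terms of a piecewise-linear path.

    path is an (n_points, d) array; returns (S1, S2) where S1[i] is the
    total increment of coordinate i and S2[i, j] is the order-2 iterated
    integral, computed via Chen's identity.
    """
    inc = np.diff(path, axis=0)        # per-segment increments
    d = path.shape[1]
    S1 = inc.sum(axis=0)
    S2 = np.zeros((d, d))
    running = np.zeros(d)              # increments seen so far
    for step in inc:
        # cross terms with everything before this segment, plus the
        # within-segment contribution (half the outer product).
        S2 += np.outer(running, step) + 0.5 * np.outer(step, step)
        running += step
    return S1, S2

# For a straight line the signature depends only on the endpoints:
line = np.array([[0.0, 0.0], [1.0, 2.0]])
S1, S2 = signature_level_1_2(line)
print(S1)        # [1. 2.]
print(S2[0, 1])  # 1.0
```

A useful sanity check, visible in the test below, is that refining the partition of a straight line leaves the signature unchanged — the invariance that makes signatures robust features for streams like (price, volume) paths.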
split app-startup from core appshell functionality
RESOLVED FIXED in mozilla1.7final
Status
P1
blocker
People
(Reporter: benjamin, Assigned: benjamin)
Tracking
Firefox Tracking Flags
(Not tracked)
Details
Attachments
(6 attachments, 5 obsolete attachments)
I'm moving appshell out of tier 9, and need to keep nsIWindowWatcher and implementation in tier 9, for xpinstall and other random crap (ugh, but I don't have time/inclination to fix the interface to not suck). embedding/components/windowwatcher is the logical destination. cls/bz, does this sound OK to you?
This is danm's thing more than mine...
WindowMediator is very much an appshell thing. appshell is a good place for it because it lives and breathes XUL Window. embedding/components is a bad place for it because, if for no other reason, that would require a compilation dependency on appshell in embedding/components. WindowMediator belongs in appshell. If I understand you correctly, I think what you actually want to do is consider making all the files spread out all over the codebase outside of appshell which use WindowMediator, use WindowWatcher instead. And be sure to check in with me before you get the inclination to make one of these interfaces not suck, whichever one you were talking about.
ok, here's the big picture: the startup sequence for firefox is changing dramatically: overview is at This means that we need to fork nsIAppShellService.idl in the new toolkit, because the current interface makes no sense for the new toolkit startup sequence. So xpfe/appshell and toolkit/appshell are going to be forked and live in tier 50 (the toolkit). However, there are some interfaces which currently live in appshell that need to be in tier 9, because gecko internals depend on them: nsIWindowMediator is used xpinstall for the xpinstall UI (a dependency that could probably broken) and DOM for window.find (a dependency that might be breakable, but I don't know enough to say). This involves some dicing-up of appshell. We can put the nsIWindowMediator and nsIXULWindow interfaces in embedding/components but leave the implementations (forked) in xxx/appshell... or we can break the xulwindow dependency on nsiwebshellwindow and move the windowmediator and xulwindow impls into embedding/components. (I also need to move nsIPopupWindowManager.idl, which is pretty simple because appshell doesn't implement or use it anyway, bug 237744).
Whoa. We're forking appshell? Why? Is it not possible to simply modify appshell to do what's needed? Why not?
Perhaps you could attach your overview to this bug? I can't see it because it "has been temporarily blocked", and as an attachment it'll probably enjoy a longer life (for all future readers to see) than on an external server.
I've read your proposal. Gotta say, replacing the "implementations we don't want to try" paragraph with one that explains why it's a good idea in the first place would have aided my comprehension. I gather this is something you've been working on with Ben for a long time, so me quickly catching up is unlikely. You may have heard, there's been a flurry of email about this project. I've said my piece; I don't much like any of it. Despite my objections, assuming you go ahead with separating the Window part of appshell from the rest, I suggest just making that a new module in whichever build tier you need. I continue to believe that the real fix is to make core code not dependent on XUL appshell code. That's a bad dependency, anyway. embedding/components is a terrible place for XULWindow et.al. Terrible. It goes there over my dead body. And please don't split the interface files away from the implementation. That's an unhappy build hack. And the XUL interface doesn't belong in embedding any more than the implementation. PS The sincerely chrome registry specific methods that are commented "xxxbsmedberg Move me to nsIWindowMediator" only add to my discontent.
AOL network blocks your DNS lookups to my host :( danm: Ben and I talked, and I think we all agree that the goal is to fork as little as possible. Here's the alternative that currently looks best: We keep xpfe/appshell in tier 9 (I would like to move the directory out of xpfe/ at some point, but that's not really important). We split+fork the following stuff into xpfe/components/startup or someplace equivalent: nsICmdLine* nsIAppSupport* and the parts of nsIAppShellService that deal with startup sequencing (these would move into a new interface "nsIXULAppStartup" or something like that): initialize doProfileStartup nativeAppSupport hideSplashScreen createStartupState ensure1Window
This splits appstartup out of appshell. It builds and works in seamonkey. It does *not* work in toolkit yet, but it will be simple to get it working in toolkit once it's reviewed for seamonkey.
Attachment #144676 - Attachment is patch: false
Attachment #144676 - Attachment mime type: text/plain → application/x-gzip
Comment on attachment 144676 [details] split appstartup out of appshell NOTE: IGNORE toolkit changes for the moment please review ONLY the seamonkey portions of this patch... I will attach the toolkit/profile-startup portions separately.
Attachment #144676 - Flags: superreview?(brendan)
Attachment #144676 - Flags: review?(bugs)
Comment on attachment 144676 [details] split appstartup out of appshell NOTE: IGNORE toolkit changes for the moment This is a crappy patch... I've figured out a way to avoid all that toolkit crud.
Attachment #144676 - Flags: superreview?(brendan)
Attachment #144676 - Flags: review?(bugs)
Comment on attachment 144688 [details] updated with, one JS error caught by beng, and without the toolkit cruft This is more reviewable ;) I have only built and tested this on linux... I will build win32 tomorrow, and upload to planetoid to test there.
Priority: -- → P1
Summary: move nsIWindowMediator and implementation from appshell to windowwatcher → split app-startup from core appshell functionality
Target Milestone: --- → mozilla1.7final
[noscript] - void createHiddenWindow(); + void createHiddenWindow(in nsIAppShell aAppShell); + + void destroyHiddenWindow(); hmm, why is destroyHiddenWindow scriptable when createHiddenWindow isn't? + [noscript] void initialize(in long argc, out string argv); whoa. is there a reason why [array, size_is(argc)] was avoided for argv? Yes, you probably just copied it, but could you fix it?
> hmm, why is destroyHiddenWindow scriptable when createHiddenWindow isn't? I don't know why createHiddenWindow is [noscript]... probably historical accident from when nsIAppShell was not a real interface. It doesn't matter... even if you were doing something unusual like bootstrapping a browser from xpcshell, you would have to go through the appstartup component, instead of manually using nsIAppShellService. > + [noscript] void initialize(in long argc, out string argv); > whoa. is there a reason why [array, size_is(argc)] was avoided for argv? Yes, > you probably just copied it, but could you fix it? size_is has to be an unsigned long. It's a stupid signature, as is nsIAppShell.Init, but let's fix that later, shall we? I have a whole boatload of potential cleanup I avoided for this patch. There's a decent amount of deCOMification that can happen in appshell next cycle, as well.
update: I had to make a few more changes for win32 and mac; I moved building the activex control from tier 9 to tier 99 (it is now built with the rest of the embedding "clients" like mfcembed). There were errors in xpfe/components/build/Makefile.in where I was overriding makefile vars using = instead of +=. And there were a couple places in mac-specific code where I needed to do an nsIAppShellService->nsIAppStartup conversion (nsNativeAppSupportMac and the appleevents code). And I had to change appstartup to use a threadsafe isupports implementation, because we proxy it in a few places. I can post a new patch, but none of those changes were major or dangerous... also, if we can reach agreement on which files are going to move, I can go ahead and get the files copied in CVS, then redo the patch with only the changed portions of those files, which ought to make detailed reviewing a lot easier.
This patch is readable, but not apply-able... I put the files which need to be moved back in their original locations and took diffs from there.
Attachment #144688 - Attachment is obsolete: true
Index: xpfe/bootstrap/nsNativeAppSupportWin.cpp + rv = compMgr->CreateInstanceByContractID(NS_COMMANDLINESERVICE_CONTRACTID, + nsnull, + NS_GET_IID(nsICmdLineService), + (void**)aResult ); why not use CallCreateInstance while you're here?
It seems like you should have changes corresponding to the client.mk activex changes to embedding/browser/activex/src/Makefile.in , but I don't see any in the patch. (Also, in the client.mk activex changes, the activex comment is dangling a few lines after the activex stuff.)
Comment on attachment 145518 [details] [diff] [review] Readable, but not apply-able What biesi and dbaron said, plus my terse list o' issues: Factory, etc. CamelCaps in nsComposerCmdLineHandler.js #ifndef MOZ__THUNDERBIRD in msgMapiHook.cpp nsIAppStartup.idl createHiddenWindow comment cut off nsAppStartup::OpenBrowserWindow's body is overindented one c-basic-offset Get mscott to look at the MOZ_THUNDERBIRD changes and test 'em if you haven't. Otherwise, the main thing is to test all the apps before landing. /be
Nits picked, except the one about MOZ__THUNDERBIRD which I didn't understand. This builds with tbird, and mscott said that I could land little mail/mailnews changes like this without further review.
Clarification: I added that tbird #ifdef because tbird doesn't have quicklaunch, and will always have a profile when processing the eventloop. It therefore doesn't need any special logic to "ensure" that there is a profile selected. This patch needs to land together with the semi-single-profile work, because this patch only covers the xpfe tine of the fork.
Also, I had to revert back to compMgr->CreateInstanceByContractID instead of using CallCreateInstance... that function links directly to the deprecated nsComponentManager:: symbols, which won't work when we're using the XPCOM glue in seamonkey.
Comment on attachment 148129 [details] [diff] [review] app-startup, revision 4 New, reduced trunk-only patch coming up? /be
Alias: app-startup
It's revived! This should build, with the same symlinks listed earlier in this bug. I have not tested the changes to nsNativeAppSupportWin.cpp yes, that code is untested but will be tested soon. Also I didn't build tbird, though the changes there are minor. Will build before checkin ;) Who wants to review? darin should, danm or dveditz, are you up to looking this over?
Comment on attachment 160387 [details] [diff] [review] app-startup, revision 5 >Index: editor/ui/nsComposerCmdLineHandler.js >+ _factory: { >+ createInstance: function(outer, iid) { >+ if (outer != null) { >+ throw R.NS_ERROR_NO_AGGREGATION; >+ } >+ >+ return new nsComposerCmdLineHandler(); return new nsComposerCmdLineHandler().QueryInterface(iid); more to come...
I think you probably mean return (new nsComposerCmdLineHandler()).QueryInterface(iid); or something similar.
shaver, darin had it right: return new nsComposerCmdLineHandler().QueryInterface(iid); ECMA-262 goes out of its way to make this work. See the productions NewExpression : MemberExpression new NewExpression and MemberExpression : new MemberExpression Arguments and CallExpression : MemberExpression Arguments Once a new foo(args) has been reduced to a MemberExpression, subsequent . or [] operators use that MemberExpression as left operand. No need to parenthesize the new expression with arguments. /be /be
Well, blow me over. Apologies!
moreover, it works just fine in practice. i use that pattern all the time ;)
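The precedence point can be sketched in plain JavaScript (Handler below is a hypothetical stand-in for nsComposerCmdLineHandler, with a toy QueryInterface):

```javascript
// Sketch of the ECMA-262 parsing rule discussed above: `new Foo(args)` with an
// argument list binds tighter than a following member access, so
// `new Foo().bar()` parses as `(new Foo()).bar()` -- no parentheses needed.
function Handler() {}
Handler.prototype.QueryInterface = function (iid) {
  // toy QueryInterface: just return the object itself
  return this;
};

var unparenthesized = new Handler().QueryInterface("nsISupports");
var parenthesized = (new Handler()).QueryInterface("nsISupports");

console.log(unparenthesized instanceof Handler); // true
console.log(parenthesized instanceof Handler);   // true
```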
just a bunch of nits
> why is this interface being forked? why don't we make seamonkey > inherit the interface from toolkit? or do you mean to remove the Actually, forking the interface was the whole point of this exercise. The many methods for dealing with the splash screen, profile migration, and command-line-handling need to change (rapidly and radically, as we've discussed), and rewriting the seamonkey profile migrator is likely to be the very last step in porting it to toolkit. We should consider xpfe/components/startup a short-term solution until seamonkey gets its porting in gear.
> We should consider xpfe/components/startup a short-term solution until seamonkey > gets its porting in gear. ok, works for me.
>Also, I had to revert back to compMgr->CreateInstanceByContractID instead of >using CallCreateInstance... that function links directly to the deprecated >nsComponentManager:: symbols, which won't work when we're using the XPCOM glue >in seamonkey. you could make CallCI use NS_GetComponentManager! ;) (quoting review comment) >>+ url.AssignWithConversion(urlToLoad); > nit: use CopyASCIItoUTF16 instead Absolutely! please do that :) (or AssignASCII, given your IsASCII check) > splash screen will toolkit offer some way to show a splash screen? aiui it doesn't have a way now. (quoting patch v5) client.mk +# Configure + +configure:: $(OBJDIR)/Makefile why this dependency? configure creates the makefile surely, not depends on it? ah... so "make -f client.mk" would trigger a configure run if needed? nsComposeCmdLineHandler.js: +const R = Components.results; hm, I don't think this is common... I'm not sure it increases readability + appStartup->Quit(nsIAppStartup::eAttemptQuit); app _startup_ quits the app? heh. nsSystemPref.cpp + if (!gSysPrefLog) return NS_ERROR_OUT_OF_MEMORY; please put the return on a second line profile/build/Makefile.in -GRE_MODULE = 1 (also public/) ? profileSelection.js: + var appStartup = Components.classes["@mozilla.org/seamonkey/app-startup;1"]. + getService(Components.interfaces.nsIAppStartup); usual style is: + var appStartup = Components.classes["@mozilla.org/seamonkey/app-startup;1"] + .getService(Components.interfaces.nsIAppStartup); with the dot from getService under the dot from .classes, which I'm sure I did wrong in this comment Yeah... why kill nsAppShellCIDs.h? put the contracts there, rather that into the idl :) nsIAppShellService.idl: nsIXULWindow createTopLevelWindow(in nsIXULWindow aParent, this function has documentation (hard to see in the patch), mind updating it to explain your new nsIAppShell arg? 
nsXULWindow + obssvc->NotifyObservers(nsnull, "xul-window-visible", nsnull); nice, this could probably be used by gnome's libappstartup-notification (or whatever that's called) xpfe/bootstrap/nsAppRunner.cpp + do_GetService(NS_COMMANDLINESERVICE_CONTRACTID,&rv); missing space before &rv xpfe/browser/src/nsBrowserInstance.cpp + rv = pIProxyObjectManager->GetProxyForObject(NS_UI_THREAD_EVENTQ, NS_GET_IID(nsIAppStartup), + appStartup, PROXY_ASYNC | PROXY_ALWAYS, + getter_AddRefs(appStartupProxy)); while you are touching this line, you could fix its indentation... xpfe/components/build/Makefile.in $(DIST)/lib/$(LIB_PREFIX)related_s.$(LIB_SUFFIX) \ + ../startup/src/$(LIB_PREFIX)appstartup_s.$(LIB_SUFFIX) \ hmm, why is this not using $DIST/lib? > Reviewers made me add this comment. hehe. xpfe nsAppStartup.cpp: + NS_ASSERTION(0, "Failed to get a platform charset"); make that NS_ERROR, please + printf("default args: %s\n", NS_ConvertUCS2toUTF8(defaultArgs).get()); maybe UCS2 -> UTF16 + nsCOMPtr<nsIInterfaceRequestor> thing(do_QueryInterface(newWindow)); + if (thing) + thing->GetInterface(NS_GET_IID(nsIWebBrowserChrome), (void **) _retval); CallGetInterface? (sorry if all this is copied. it shows with + in the patch! ;) ) + var cmdLineService = Components.classes[ "@mozilla.org/app-startup/commandLineService;1" ] the appstartup service has /toolkit/ in its contractid, but cmdline is in /app-startup/? why not put both into /app-startup/, the service as /app-startup/service;1 or something? toolkit/xre/nsINativeAppSupport.idl + * The interface provides these functions: now that's an interesting way to document the methods. anyway, not your fault. but maybe you want to fix it anyway? ;) (i.e. put the comments for each function directly in front of it)
> you could make CallCI use NS_GetComponentManager! ;) Followup bug 263360 filed. > will toolkit offer some way to show a splash screen? aiui it doesn't have a way now. No, unless there is great wailing and gnashing of teeth. > (quoting patch v5) > client.mk > +# Configure > + > +configure:: $(OBJDIR)/Makefile > > why this dependency? configure creates the makefile surely, not depends on it? > ah... so "make -f client.mk" would trigger a configure run if needed? Actually, configure is a "fake" target, so that I can get client.mk to run configure without doing any more building steps. e.g. "make -f client.mk configure" > profile/build/Makefile.in > -GRE_MODULE = 1 > (also public/) > > ? Yes. nsIProfile.idl, though frozen, should not be part of the GRE, since it is not part of the toolkit, only part of seamonkey. > xpfe/components/build/Makefile.in > $(DIST)/lib/$(LIB_PREFIX)related_s.$(LIB_SUFFIX) \ > + ../startup/src/$(LIB_PREFIX)appstartup_s.$(LIB_SUFFIX) \ > > hmm, why is this not using $DIST/lib? Because I am not exporting this lib. Basically, I want to stop exporting a lot of the intermediate libs that we use to build larger libs. It helps save a lot of disk space on win32, and it's gratuitous. > +.
(In reply to comment #35) > > ah... so "make -f client.mk" would trigger a configure run if needed? > > Actually, configure is a "fake" target, so that I can get client.mk to run > configure without doing any more building steps. e.g. "make -f client.mk configure" whoops, that's what I meant to write. but it will not always run configure, right? maybe that'd be more useful? > Because I am not exporting this lib. Basically, I want to stop exporting a lot > of the intermediate libs that we use to build larger libs. It helps save a lot > of disk space on win32, and it's gratuitous. ah, makes sense. > > +. (rv = ) CallGetInterface(thing.get(), _retval); (I'm not quite sure whether the .get() is needed...) oh, nevermind. that won't help with a void**.
Comment on attachment 160387 [details] [diff] [review] app-startup, revision 5 >+const C = Components.classes; >+const I = Components.interfaces; >+const R = Components.results; These aren't prevalent style. Declaring a particular interface or result, e.g. const nsISupports = Components.interfaces.nsISupports is preferred. >+ _CID: Components.ID("{f7d8db95-ab5d-4393-a796-9112fe758cfa}"), >+ _contractIDPrefix: "@mozilla.org/commandlinehandler/general-startup;1?type=", These should be top-level consts. The scope is private to the component anyway. >+ _factory: { >+ createInstance: function(outer, iid) { >+ if (outer != null) { >+ throw R.NS_ERROR_NO_AGGREGATION; >+ } >+ >+ return new nsComposerCmdLineHandler(); >+ } This could be a top-level object too. >+ if (!iid.equals(I.nsICmdLineHandler) && >+ !iid.equals(I.nsISupports)) { >+ throw R.NS_ERROR_NO_INTERFACE; >+ } You braced this but not the one above... I'm no fan of braces but this is a new file so you get to choose the style, as long as you're consistent. >+ get defaultsArgs() { Nit: should be defaultArgs. While I mention it, you could probably use JS properties rather than writing each getter out, it's not as if anyone's going to be able to change these values. >+ catMan.deleteCategoryEntry("command-line-argument-handlers", >+ "nsComposerCmdLineHandler", true); Looks like this will be the first command line handler to unregister itself correctly! 
>+ rv = compMgr->CreateInstanceByContractID( >+ NS_COMMANDLINESERVICE_CONTRACTID, >+ nsnull, NS_GET_IID(nsICmdLineService), >+ (void**) aResult); I don't claim to have followed the create instance debate but I assume at some point this will become rv = CallCreateInstance(NS_COMMANDLINESERVICE_CONTRACTID, aResult); >+ nsCOMPtr<nsIInterfaceRequestor> thing(do_QueryInterface(newWindow)); >+ if (thing) >+ thing->GetInterface(NS_GET_IID(nsIWebBrowserChrome), (void **) _retval); Bah, why is it you can do nsCOMPtr<nsIWebBrowserChrome> wbc = do_GetInterface(newWindow) but not CallGetInterface(newWindow, _retval); :-/
> oh, nevermind. that won't help with a void**. actually do mind, as _retval is NOT a void** - so CallGetInterface should work
Fixed on trunk. Onwards to command-line handling!
Status: NEW → RESOLVED
Closed: 15 years ago
Resolution: --- → FIXED
>+ if (!iid.equals(I.nsICmdLineHandler) && >+ !iid.equals(I.nsISupports)) { >+ throw R.NS_ERROR_NO_INTERFACE; >+ } Bah, I failed to notice the bad indenting of "throw" :-[ >+ get defaultsArgs() { >+ return "about:blank" >+ }, I was hoping for defaultArgs: "about:blank", :-/
Doh, I also overlooked this :-[ >+ var appStartup = Components.classes["@mozilla.org/toolkit/app-startup;1"]. >+ getService(Components.interfaces.nsIAppStartup); which should of course line up .getService with .classes
On top of the build bustage fixes that dbaron already checked in this gets it working again for OS/2. Can someone check this in?
The Thunderbird windows build is still busted on Tinderbox from these changes. Can you take a look? Re-opening until the bustage clears...Thanks!
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Not trying to be a pest, but the Thunderbird windows trunk build is still in flames from this checkin. It's been a couple days: any chance of an ETA on when we can expect to get Windows builds again? Thanks! -Scott
This change also busted the Sunbird build process. I get the same error message as the WINNT 5.0 patrocles Clbr Tbird tinderbox. Although it is not directly related to this bustage, the change to splash.rc (and other changes as appropriate) should probably also be made for Sunbird.
As this is causing "red" on patrocles and prevents cvs builds from compiling Thunderbird and Sunbird changing to "blocker".
Severity: normal → blocker
(In reply to comment #47) > Created an attachment (id=164125) > MOZ_XUL_APP winhooks bustage fix
With this fix in my Thunderbird tree, I now get:
Creating Resource file: module.res
/cygdrive/e/mozilla/source/thunderbird/mozilla/build/cygwin-wrapper windres -O coff --use-temp-file -DMOZ_THUNDERBIRD --include-dir /cygdrive/e/mozilla/source/thunderbird/mozilla/mail/app -DTHUNDERBIRD_ICO=\"../../dist/branding/thunderbird.ico\" -DOSTYPE=\"WINNT5.1\" -DOSARCH=\"WINNT\" -DAPP_VERSION=\"0.6+\" -DBUILD_ID=\"0000000000\" --include-dir ../../dist/include/string --include-dir ../../dist/include/xpcom --include-dir ../../dist/include/xulapp --include-dir ../../dist/include/xpinstall --include-dir ../../dist/include/appshell --include-dir ../../dist/include --include-dir ../../dist/include --include-dir ../../dist/include/nspr -o module.res /cygdrive/e/mozilla/source/thunderbird/mozilla/mail/app/module.rc
e:/mozilla/source/thunderbird/mozilla/mail/app/module.rc:84:35: nsNativeAppSupportWin.h: No such file or directory
e:\gnu\mingw\bin\windres.exe: e:\gnu\mingw\bin\gcc exited with status 1
make[4]: *** [module.res] Error 1
make[4]: Leaving directory `/cygdrive/e/mozilla/source/thunderbird/mozilla/mail/app'
make[3]: *** [libs] Error 2
make[3]: Leaving directory `/cygdrive/e/mozilla/source/thunderbird/mozilla/mail'
make[2]: *** [libs] Error 2
make[2]: Leaving directory `/cygdrive/e/mozilla/source/thunderbird/mozilla'
make[1]: *** [alldep] Error 2
make[1]: Leaving directory `/cygdrive/e/mozilla/source/thunderbird/mozilla'
make: *** [alldep] Error 2
Just having a trunk Seamonkey Mail window with no Navigator window, trying to _get_ a Navigator window fails, and yields: Error: uncaught exception: [Exception... "Could not convert JavaScript argument (NULL value cannot be used for a C++ reference type) arg 0 [nsISupports.QueryInterface]" nsresult: "0x8057000b (NS_ERROR_XPC_BAD_CONVERT_JS_NULL_REF)" location: "JS frame :: chrome://communicator/content/tasksOverlay.js :: OpenBrowserWindow :: line 120" data: no] Warning: reference to undefined property Components.interfaces.nsICmdLineHandler Source File: chrome://communicator/content/tasksOverlay.js Line: 120 Trying to quit the app via File | Quit fails as well, and yields: Error: uncaught exception: [Exception... "Component returned failure code: 0x80570018 (NS_ERROR_XPC_BAD_IID) [nsIJSCID.getService]" nsresult: "0x80570018 (NS_ERROR_XPC_BAD_IID)" location: "JS frame :: chrome://global/content/globalOverlay.js :: goQuitApplication :: line 23" data: no] Warning: reference to undefined property Components.interfaces.nsIAppStartup Source File: chrome://global/content/globalOverlay.js Line: 23
David, does adding LOCAL_INCLUDES += -I$(topsrcdir)/toolkit/xre to mail/app/Makefile.in fix the bustage? My VC8 build is busted in addrbook currently, so I couldn't verify if anything further along built before attaching the previous patch.
That fix allows my build to finish OK. (In reply to comment #50) > David, does adding LOCAL_INCLUDES += -I$(topsrcdir)/toolkit/xre to > mail/app/Makefile.in fix the bustage? My VC8 build is busted in addrbook > currently, so I couldn't verify if anything further along built before attaching > the previous patch.
stephend, are you using an installer build? I think I probably forgot to add some .xpt files to the packaging manifests which would cause the error you're describing.
I fixed mail/app/Makefile.in adding the LOCAL_INCLUDES line, and fixed the installer packaging for seamonkey/firefox/thunderbird. I'm going to mark this bug FIXED. If there are additional issues, can you open a separate bug (mark the dependency) so that I can keep track of things?
Status: REOPENED → RESOLVED
Closed: 15 years ago → 15 years ago
Resolution: --- → FIXED
(In reply to comment #52) > stephend, are you using an installer build? I think I probably forgot to add > some .xpt files to the packaging manifests which would cause the error you're > describing. Yes, I am using an installer build, and upgrading to 2004-11-01-04 fixed this. Thanks.
Product: Browser → Seamonkey
I've been crashing on shutdown for a while, in js_PurgeDeflatedStringCache. This fixes it.
(Which was essentially a regression of bug 249737, so I'm just going to check it in. Never mind the reviews. That's what you get for copying an old copy of a file and just checking it in.)
Darin saw the js_PurgeDeflatedStringCache signature too -- thanks for finding the culprit, dbaron. /be
I received:
E:/cvs/work/mozilla/toolkit/xre/nsNativeAppSupportOS2.cpp:883: warning: invalid conversion from `const char* const' to `char*'
E:/cvs/work/mozilla/toolkit/xre/nsNativeAppSupportOS2.cpp: At global scope:
E:/cvs/work/mozilla/toolkit/xre/nsNativeAppSupportOS2.cpp:943: prototype for `nsresult nsNativeAppSupportOS2::Quit()' does not match any in class `nsNativeAppSupportOS2'
E:/cvs/work/mozilla/toolkit/xre/nsNativeAppSupportOS2.cpp:303: candidate is: virtual void nsNativeAppSupportOS2::Quit()
E:/cvs/work/mozilla/toolkit/xre/nsNativeAppSupportOS2.cpp:943: `nsresult nsNativeAppSupportOS2::Quit()' and `virtual void nsNativeAppSupportOS2::Quit()' cannot be overloaded
make.exe[4]: *** [nsNativeAppSupportOS2.o] Error 1
make.exe[4]: Leaving directory `E:/cvs/work/mozilla/sbobj/toolkit/xre'
make.exe[3]: *** [libs] Error 2
make.exe[3]: Leaving directory `E:/cvs/work/mozilla/sbobj/toolkit'
make.exe[2]: *** [tier_50] Error 2
make.exe[2]: Leaving directory `E:/cvs/work/mozilla/sbobj'
make.exe[1]: *** [default] Error 2
make.exe[1]: Leaving directory `E:/cvs/work/mozilla/sbobj'
make.exe: *** [build] Error 2
building Firefox and Sunbird (clean builds). Is this because the "Final build bustage for OS/2" patch has not been committed? Andy
P.S. I would test it but as I am building the suite right now (which doesn't have this issue) it will be tomorrow before I can start a build with the patch in place and hours from then before I would be certain if this was a fix for this issue. Therefore I am not reopening unless I test it before I hear back here.
Comment on attachment 163959 [details] [diff] [review] Final build bustage fix for OS/2 This never went in like that.
Attachment #163959 - Attachment is obsolete: true
This was not checked into the the Aviary Branch for OS/2 (neither was 258217). I found issue was NS_IMETHOD Quit; was changed to void Quit(); Changing it back cleans up this error when building Firefox and Sunbird on OS/2 from the Trunk. As I get an error from 258217 I can't say for certain that it fixes everything but as best I can tell it should. Not sure why it was switched to void as the windows version was not changed so I don't know if it was a mistake or intentional but something else wasn't done to complete that change.
Relanded relevant parts of patch for globalOverlay.js and nsExtensionManager.js.in following landing of aviary branch.
https://bugzilla.mozilla.org/show_bug.cgi?id=app-startup
Hi
I get the following compilation error under Linux when compiling with g++ v.4.1.2:
mathplot.cpp: In member function 'void mpFXYVector::SetData(const std::vector<double, std::allocator<double> >&, const std::vector<double, std::allocator<double> >&)':
mathplot.cpp:2385: error: 'wxLogError' was not declared in this scope
I am using mathplot.cpp rev.66. Compiles fine under Windows.
I guess I am missing an include file but there seem to be no other wxWidgets related compilation errors.
Any thoughts please?
David
Concluded that mathplot.cpp fails to compile when precompiled headers are used (compile option -DWX_PRECOMP).
This is probably because inclusion of wx/wxprec.h is commented out in mathplot.cpp rev.66:
// For compilers that support precompilation, includes "wx.h".
#include <wx/window.h>
//#include <wx/wxprec.h>
(Don't understand why this has been done).
So solution for me is to not use precompiled headers (delete -DWX_PRECOMP) from compile command.
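For reference, the usual wxWidgets idiom that supports both precompiled and non-precompiled builds looks roughly like this (a sketch of the standard convention, not the actual mathplot.cpp source):

```cpp
// For compilers that support precompilation, includes "wx.h".
#include <wx/wxprec.h>

#ifdef __BORLANDC__
    #pragma hdrstop
#endif

#ifndef WX_PRECOMP
    // Without precompiled headers, pull in the headers explicitly.
    #include <wx/wx.h>
#endif

#include <wx/log.h>  // declares wxLogError
```

With a pattern like this in place, a file should compile both with and without -DWX_PRECOMP.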
I suggest this is a bug.
Best regards
David
http://sourceforge.net/p/wxmathplot/discussion/297266/thread/aa3b9bb0
It does NOT give an error. But it also does not produce an output.
The script is made to do the following:
The script takes an input file of 4 tab-separated columns:
It then counts the unique values in Column 1 and the frequency of corresponding values in Column 4 (which contains 2 different tags: C and D).
The output is 3 tab-separated columns containing the unique values of column 1 and their corresponding frequency of values in Column 4: Column 2 has the frequency of the string in Column 1 that corresponds with Tag C and Column 3 has the frequency of the string in Column 1 that corresponds with Tag D.
Here is a sample of input:
algorithm-n    like-1-resonator-n    8.1848    C
algorithm-n    produce-hull-n    7.9104    C
algorithm-n    like-1-resonator-n    8.1848    D
algorithm-n    produce-hull-n    7.9104    D
anything-n    about-1-Zulus-n    7.3731    C
anything-n    above-shortage-n    6.0142    C
anything-n    above-1-gig-n    5.8967    C
anything-n    above-1-magnification-n    7.8973    C
anything-n    after-1-memory-n    2.5866    C
Here is a sample of desired output:
algorithm-n    2    2
anything-n    5    0
The code that I have done is the following:
from collections import defaultdict, Counter
####################
import os
import glob

folderPath = "Python_Counter"  # declare here

for input_file in glob.glob(os.path.join(folderPath, 'out_')):
    with open(input_file, "rb") as opened_file:
        lemma_sense_freqs = sortAndCount(input_file)

output_file = "count_*.csv"
writeOutCsv(output_file, lemma_sense_freqs)
My intuition is the problem is coming from the "glob" function.
But, as I said before: the code itself DOES NOT give me an error -- but it doesn't seem to produce an output either.
Can someone help?
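One thing worth checking about that intuition: glob.glob only expands wildcard characters, so a pattern with no * has to match a filename exactly. A minimal sketch of the difference (hypothetical file names in a scratch directory):

```python
import glob
import os
import tempfile

# Create a scratch directory with a few files to match against.
tmp = tempfile.mkdtemp()
for name in ("out_a.txt", "out_b.txt", "other.txt"):
    open(os.path.join(tmp, name), "w").close()

# 'out_' has no wildcard, so it only matches a file literally named "out_".
print(glob.glob(os.path.join(tmp, "out_")))        # []
# 'out_*' matches every file whose name starts with "out_".
print(len(glob.glob(os.path.join(tmp, "out_*"))))  # 2
```

A pattern that silently matches nothing would produce exactly the symptom described: no error, and no output.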
http://www.dreamincode.net/forums/topic/332406-problem-using-glob-in-my-python-code/page__pid__1922750__st__0
Search using special commands
Project description
# Bounce
It’s a keyword search engine, meaning you can configure it to redirect yt to Youtube, so a search like yt weird al would redirect right to Youtube’s search.
1 minute getting started
Install it:
$ pip install bounce
Start it:
$ bounce-server
Query it:
$ curl " weird al"
You can also run it using any WSGI server like uWSGI using the included bouncefile.py as the wsgi-file.
Configuration
url configuration
Bounce has a built-in configuration file with generic mappings but you can also create your own that bounce will read when starting by setting the environment variable BOUNCE_CONFIG with a path to your custom configuration python file:
export BOUNCE_CONFIG=/path/to/bounce_config.py
The file must import bounce.core.commands:
from bounce.core import commands
The commands.add() method takes a space separated list of commands and a value:
commands.add("foo bar", "{}")
So, if you called bounce with the input:
foo blammo
It would redirect to:
You could also call it with bar blammo and get the same thing because we set up the command keywords as foo bar so either foo or bar would redirect.
Notice that the value is a python format string.
callback configuration
value can also be a callback:
def callback(q):
    # manipulate q in some way and then return where you would like to go
    return '{}'.format(q)

commands.add("foo bar", callback)
That makes it so bounce can do all kinds of crazy things.
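To make the keyword/value/default mechanics concrete, here is a toy sketch of how such a dispatcher could behave (an illustration only, not Bounce's actual implementation; the example URLs are hypothetical):

```python
# Toy keyword dispatcher: maps keywords to format strings or callbacks,
# with an optional default used when no keyword matches.
_registry = {}
_default = None

def add(keywords, value, default=False):
    global _default
    for kw in keywords.split():
        _registry[kw] = value
    if default:
        _default = value

def resolve(query):
    parts = query.split(None, 1)
    keyword = parts[0]
    rest = parts[1] if len(parts) > 1 else ""
    value = _registry.get(keyword)
    if value is None:
        # No command matched: hand the whole query to the default.
        value, rest = _default, query
    if callable(value):
        return value(rest)
    return value.format(rest)

add("yt", "https://www.youtube.com/results?search_query={}")
add("g", "https://www.google.com/search?q={}", default=True)

print(resolve("yt weird al"))  # https://www.youtube.com/results?search_query=weird al
print(resolve("weird al"))     # falls through to the default
```

This also shows why an unrecognized first word is harmless: it simply becomes part of the default search.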
default configuration
By default, Google is the search engine of choice, so if you don’t start your request with a command, bounce will redirect to Google search with your search string. If you would like to change this just pass default=True to one of your custom commands:
commands.add("keyword", "value", default=True)
Viewing configuration
the command ls will list all the commands bounce supports
Testing
To test locally from the repo:
$ python bounce/bin/bounce-server
That should produce output like this:
 * Running on (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger pin code: XXX-XXX-XXX
Which you can then use to test:
$ curl "..."
And that’s it.
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/bounce/
How To make a "live" textbox?
How would I use the ui module (preferably using the built in visual ui editor) to create a ui that has a constantly updating text box? I want to be able to constantly feed my user information as they progress through my text-based-game. (This means I don't want the user to be able to edit the text I show them.)
I can't help with the auto updating text box, but If you don't want the user to be able to edit the text you show them, text view would be the thing to use. You can set it up to look just like a text field if you're going for that.
If you want a scrollable view, similar to the console, then TextView is the way to go. You can set
tv.editable = False
To disable editing.
To append text, you would simply add to the text property:
tv.text += 'You were eaten by a Grue.\n'
In the above, tv is the actual textview object, which you can get by name from your loaded root view.
You might consider implementing a delegate for the textview, which implements
def textview_did_change(self, textview):
    textview.content_offset = (0, textview.content_size[1] - textview.height)
Which forces scrolling to the bottom after each line is added
You guys are the best! That's exactly what I was looking for, thanks for the help.
https://forum.omz-software.com/topic/1180/how-to-make-a-live-textbox
Please help me understand what I am doing wrong.
I have in my settings.py:
PROJECT_ROOT = os.path.abspath(os.path.dirname(__file__))
STATIC_URL = os.path.join(PROJECT_ROOT, 'static').replace('\\','')+'/'
{% load static %}
<link rel="stylesheet" type="text/css" href="{% static "/css/table.css" %}">
"GET /var/cardsite/cardsite/static/css/table.css HTTP/1.1" 404 1696
ls -la /var/cardsite/cardsite/static/css/table.css
-rw-r--r-- 1 root root 77 Sep 25 16:15 /var/cardsite/cardsite/static/css/table.css
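A quick sketch of why the generated href is a filesystem path rather than a URL: the setting above builds STATIC_URL from an absolute directory with os.path.join, and the {% static %} tag essentially prepends STATIC_URL to the asset path (values below taken from the question; STATIC_URL is meant to be a URL prefix like '/static/', not a directory):

```python
import os

# Reproducing the setting from the question (assuming a Linux path):
PROJECT_ROOT = "/var/cardsite/cardsite"
STATIC_URL = os.path.join(PROJECT_ROOT, 'static').replace('\\', '') + '/'

print(STATIC_URL)                    # /var/cardsite/cardsite/static/
print(STATIC_URL + 'css/table.css')  # the bad href that 404s in the log
```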
Read this Django_Docs
You must also have a STATIC_ROOT option set before you can use the static files; here's some help.
add this to your code:
STATIC_URL = os.path.join(PROJECT_ROOT, 'static').replace('\\','')+'/'

# Here you can add all the directories from where you want to use your js, css etc
STATICFILES_DIRS = [
    # This can be same as the static url
    os.path.join(PROJECT_ROOT, "static"),
    # also any other dir you wanna use
    "/any/other/static/path/you/wanna/use",
]

# This is the static root dir from where django uses the files from.
STATIC_ROOT = os.path.join(PROJECT_ROOT, "static_root")
you will also need to specify it in the urls.py file, just add the following code to the urls.py file:
from django.conf import settings
from django.conf.urls.static import static

urlpatterns += static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
after you add this, run the command:
python manage.py collectstatic
This will copy all the statics you need to the static root dir.
|
https://codedump.io/share/MmWzkxkkRcYo/1/receiving-the-404-error-in-getting-of-static-django-files
|
CC-MAIN-2017-04
|
refinedweb
| 244
| 52.97
|
By 'document' we mean a structure of dicts, lists and other primitive types, that can be serialized to JSON or a Python Pickle.
The resulting document can be used in combination with the Django cache layer to create blazingly fast views that do not hit the database. The data can also be synced to a NoSQL store like MongoDB, for consumption by other frameworks, like Meteor (NodeJS based).
If any data changes in the ORM (even if it's on some deep many-to-many relationship far away from the root object), django-denormalize will automatically trigger a cache invalidation of the root object's document and/or sync the new document to your preferred NoSQL store.
This module also includes special support for content in FeinCMS objects: all regions and content types will be available under a 'content' dictionary.
Example
For example, suppose you have the following models:
class Book(models.Model):
    title = models.CharField(_("title"), max_length=80)
    year = models.PositiveIntegerField(_("year"), null=True)
    authors = models.ManyToManyField(Author)
    ...

class Author(models.Model):
    name = models.CharField(_("name"), max_length=80)
    ...
You can write the following class to describe your document collection:
from denormalize.models import DocumentCollection

class BookCollection(DocumentCollection):
    model = Book
    name = "books"
    prefetch_related = ['authors']
Let's print all documents:
books = BookCollection()
for doc in books.dump_collection():
    print doc
Each document will have the following structure:
{
    'id': 42,
    'title': u'Cooking for Geeks',
    'year': 2010,
    'authors': [
        {
            'id': 18,
            'name': u'Jeff Potter',
            ...
        }
    ],
    ...
}
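The flattening itself can be illustrated with a toy, Django-free sketch (hypothetical plain classes, not django-denormalize's API):

```python
# Plain-Python sketch: flatten a small object graph into a dict "document".
class Author(object):
    def __init__(self, id, name):
        self.id, self.name = id, name

class Book(object):
    def __init__(self, id, title, year, authors):
        self.id, self.title, self.year, self.authors = id, title, year, authors

def denormalize(book):
    # Follow the many-to-many relation and embed the authors inline.
    return {
        'id': book.id,
        'title': book.title,
        'year': book.year,
        'authors': [{'id': a.id, 'name': a.name} for a in book.authors],
    }

doc = denormalize(Book(42, u'Cooking for Geeks', 2010,
                       [Author(18, u'Jeff Potter')]))
print(doc['authors'][0]['name'])  # Jeff Potter
```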
This in itself can be useful, but the real power of django-denormalize lies in its backends. Suppose we want to cache these documents, to avoid hitting the database. We can use these documents in our views, instead of accessing the Django ORM. Backend and view code:
# In models.py
from denormalize.backends.cache import CacheBackend

backend = CacheBackend()
backend.register(books)

# In views.py
def our_book_view(request, book_id):
    book_doc = backend.get_doc(books, book_id)
    if not book_doc:
        raise Http404("Book not found")
    return render(request, 'book.html', {'book': book_doc})
Our CacheBackend will try to fetch the book document from the Django cache. If it cannot be found, it will generate the document from the ORM and then store it in the cache.
And best of all: if any data on the Author or Book objects for this book changes, the cache will automatically be invalidated for us! The book_doc we retrieve will always be up to date.
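The get-or-generate pattern the CacheBackend follows can be sketched in a few lines of plain Python. This is an illustrative sketch only: the real backend keys entries through Django's cache framework and invalidates via model signals, and `generate` here stands in for building the document from the ORM.

```python
# Illustrative sketch of the CacheBackend's get-or-generate pattern.
# A plain dict stands in for the Django cache layer.
cache = {}

def get_doc(collection_name, doc_id, generate):
    """Return the cached document, generating (and storing) it on a miss."""
    key = (collection_name, doc_id)
    if key not in cache:
        cache[key] = generate(doc_id)  # only hit the "database" on a miss
    return cache[key]

def invalidate(collection_name, doc_id):
    """Drop a stale document, e.g. after a model change signal fires."""
    cache.pop((collection_name, doc_id), None)

calls = []
def build(doc_id):
    calls.append(doc_id)  # track how often we "hit the database"
    return {"id": doc_id, "title": "Cooking for Geeks"}

doc1 = get_doc("books", 42, build)
doc2 = get_doc("books", 42, build)  # second call is served from the cache
print(len(calls))  # → 1
```

The key property, mirrored from the text above: repeated reads never regenerate the document until an invalidation happens.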
How does this compare with simply using the Django page cache?
The traditional approach to Django scalability is using the page cache to cache the entire page rendered by the view. This works quite well, but it has two big disadvantages:
- The cache will not automatically be invalidated as soon as the underlying data changes. If you set the page cache time to 60 seconds, it will take up to 60 seconds for a change to be visible on the site.
- This approach does not work well for websites where users can login and see customized content.
In simpler cases, these problems can be worked around by using template fragment caching, as this allows you to cache common regions, and specify which variables should be incorporated into the cache key. But even in our simple Book example, it's not easy to invalidate the cache on changes to Author.
The disadvantages of the django-denormalize approach are:
- You no longer have access to the Django models and its methods in your templates. You are dealing with the raw data. Of course, you can add any extra information you might need in the template by extending the DocumentCollection, or by creating custom template filters to calculate some value.
- Writes by the ORM to models that are included in documents are slower, because they are monitored for changes.
MongoDB backend
The MongoDB backend works quite similar to the CacheBackend:
    # In models.py
    from denormalize.backends.mongodb import MongoBackend

    backend = MongoBackend(
        name='mongo',
        db_name='test_denormalize',
        connection_uri='mongodb://localhost')
    backend.register(books)
Because the data is persistent and accessed directly through the MongoDB API, you need to take care to keep it in sync. You can trigger a full one-way sync using the following management command (TODO: currently only implemented for LocMemBackend, not yet for the MongoBackend. Coming soon!):
$ ./manage.py denormalize_sync mongo books
Whenever you update the data through the ORM, the corresponding document will be updated automatically. The backend preserves any extra keys you may have set on the document root in MongoDB. Make sure, however, not to add or change keys on subdocuments created by the driver, because they will be overwritten. In the book example above, it is safe to set extra keys on the root document, but not inside the 'authors' subdocuments.
Creating aggregate collections
Occasionally you may want to aggregate data from more than one object on the root model. The key differences here are:
- The output documents do not have a 1:1 relation with the input documents.
- Any change on any root object should trigger an update.
Use cases:
- Creating one document with a tree structure of pages or categories to generate a menu.
- Calculating statistics about data stored in an entire table.
- Generating an index document, mapping one field to the ids of the documents where the field has a certain value.
AggregateCollection makes this really easy. The following collection will create an index by tag:
    class BookTagIndexCollection(AggregateCollection):
        model = Book
        name = 'book_tags'
        prefetch_related = 'tags'

        def aggregate(self, key):
            assert key == 'default'
            index = {}
            for book in self.queryset().all():
                for tag in book.tags.all():
                    tagname = tag.name
                    index.setdefault(tagname, set()).add(book.id)
            return index
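Stripped of the ORM, the aggregation above is just an inverted index. A plain-Python equivalent, with dicts standing in for Book objects, looks like this:

```python
# Plain-Python equivalent of the tag aggregation above: build an inverted
# index mapping each tag name to the set of ids of the books carrying it.
def build_tag_index(books):
    index = {}
    for book in books:
        for tag in book["tags"]:
            index.setdefault(tag, set()).add(book["id"])
    return index

books = [
    {"id": 1, "tags": ["cooking", "science"]},
    {"id": 2, "tags": ["science"]},
]
print(build_tag_index(books))  # → {'cooking': {1}, 'science': {1, 2}}
```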
FeinCMS support
Django-denormalize has experimental support for FeinCMS. If you use the special FeinCMSCollection, the content attribute will be set to a dict with all regions represented as lists. All content types are included by default. If you want to follow relations on content types, you need to explicitly define all relations to follow. This will become easier in the future.
Disadvantages, bugs and implementation notes
Bugs and limitations:
- Django-denormalize has not yet been extensively tested in real-world applications. Expect bugs. And since it's an early beta release, there is no guarantee that the API will not change without warning in the near future.
- Using django-denormalize on models that receive a lot of writes might significantly slow down your application, as every write will trigger database queries to determine the affected documents, and regeneration of the documents that have changed. Keep your view counters and last-login timestamps out of the models included in documents! (You might want to move these to a NoSQL store anyway.)
- If you bypass the ORM (raw queries, manage.py dbshell, other applications, etc.), django-denormalize cannot detect the changes made to the models. After performing a large batch operation, flush the Django cache, or run a full sync (the denormalize_sync management command) to update your NoSQL backend, depending on how you use django-denormalize.
- If you are syncing to a NoSQL store and the NoSQL database is not available, you will lose the update; it is currently not rescheduled (TODO: implement a transaction log to keep track of changes and whether they have been properly synced). You should run a regular full sync in a cronjob.
- Syncing happens only one way. If you want to change data, you need to perform the modification on the ORM side, not the NoSQL side. We do try hard not to overwrite any extra attributes you added in the NoSQL backends.
- A full sync currently does not delete stale objects (TODO)
- Keep the storage limitations of your backends in mind. Memcached can only store objects of up to 1MB, MongoDB has a limit of 16MB. Make sure your documents will not exceed these limits.
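Since exceeding a backend's value-size limit typically only fails at store time, it can help to check a document's size up front. A hedged sketch, using the limits quoted above; note that the serialization your backend actually uses (pickle for the Django cache, BSON for MongoDB) will produce somewhat different sizes than JSON:

```python
import json

# Size limits quoted in the text above.
MEMCACHED_LIMIT = 1 * 1024 * 1024    # ~1 MB per memcached value
MONGODB_LIMIT = 16 * 1024 * 1024     # 16 MB per MongoDB document

def document_size(doc):
    """Approximate a document's stored size via its JSON encoding."""
    return len(json.dumps(doc).encode("utf-8"))

doc = {"id": 42, "title": "Cooking for Geeks", "authors": [{"id": 18}]}
assert document_size(doc) < MEMCACHED_LIMIT
```

A real check would run inside the backend's store path and log or refuse oversized documents rather than assert.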
Types of projects that would benefit most from django-denormalize:
- Writes are rare and mostly occur due to content updates in the Django admin, like in CMS systems.
- There are a lot more reads than writes, and you want to speed up the read views, while keeping the front-end personalized and responsive to data changes.
- You want to use Meteor to build the front-end side of your application, but do not feel like implementing a CMS in Meteor. Django-denormalize allows you to build the CMS backend using the Django admin and FeinCMS. This was the original reason to start this project, so expect more updates to support this!
- You want to use MongoDB to access/query your data, but prefer to keep your primary data in a traditional, proven, relation database system you have 10 years experience with, because it makes you or your DBA sleep better.
Alternatives
Django-nonrel allows you to use the Django ORM to directly access a NoSQL database, but with limitations. If you do a lot of writes from your front-end views, or want to prevent data duplication, this might be a better solution.
PS: Need another backend? Writing one is quite simple! You only need to override a base class, and implement a few methods.
Source: https://bitbucket.org/WoLpH/django-denormalize
Troubleshooting
Can't upgrade to Indie/Business from Trial Account
If you recently purchased Xamarin.iOS and previously started a Xamarin.iOS Trial, you may need to complete the following steps to get this license change picked up by Xamarin Studio or Visual Studio.
- Close Xamarin Studio/Visual Studio
- Remove all files from ~/Library/MonoTouch on Mac or %PROGRAMDATA%\MonoTouch\License\ for Windows
- Re-open Xamarin Studio/Visual Studio and build a Xamarin.iOS project
This should get you up and running. If you continue to have problems, you may want to try an Offline Activation to complete the activation of your workstation.
Receiving 'Activation Incomplete' Error Message
This issue may occur when using Xamarin.iOS for Visual Studio. To resolve this issue, please send the logs from the following location to contact@xamarin.com.
Log location: %LocalAppData%/Xamarin/Logs
Receiving 'Error Retrieving Update Information' Error Message
When attempting to update the software and this error message appears, please try restarting your IDE and logging out and then back in to your account in the IDE.
How do I create outlets or actions with Interface Builder?
With the introduction of the Xamarin Designer for iOS in Xamarin Studio and Visual Studio, Xamarin.iOS developers can now take advantage of creating a UI through storyboards and .xibs. Refer to the Hello, iOS guides for more information on using the designer.
You can also refer to Apple's Outlet and Actions guides for more information on using Outlets and Actions in IB.
System.Text.Encoding.GetEncoding throws NotSupportedException
You may be using an encoding that is not added by default. Check the Internationalization page to learn how to add support for more encodings.
System.MissingMethodException (anything else)
The member was likely removed by the linker, and thus doesn't exist in the assembly at runtime. There are several solutions to this:
- Add the [Preserve] attribute to the member. This will prevent the linker from removing it.
- When invoking mtouch, use the -nolink or -linksdkonly options:
  - -nolink disables all linking.
  - -linksdkonly will only link Xamarin.iOS-provided assemblies, such as monotouch.dll or xamarin.ios.dll.
Note that assemblies are linked so that the resulting executable is smaller; thus, disabling linking may result in a larger executable than is desirable.
You are getting a ModelNotImplementedException
If you are getting this exception this means that you are calling base.Method () on a class that overrides a Model. You do not need to call the base method in a class for models (these are classes that are flagged with the [Model] attribute).
This class is not key value coding-compliant for the key XXXX
If you get this error when loading a NIB file it means that the value XXXX was not found on your managed class. This means that you are missing a declaration like this:
    [Connect]
    TypeName XXXX {
        get { return (TypeName) GetNativeField ("XXXX"); }
        set { SetNativeField ("XXXX", value); }
    }
The above definition is automatically generated by Xamarin Studio for any XIB files that you add to Xamarin Studio, in the NAME_OF_YOUR_XIB_FILE.designer.xib.cs file.
Additionally, the types containing the above code must be a subclass of NSObject. If the containing type is within a namespace, it should also have a [Register] attribute which provides a type name without a namespace (as Interface Builder doesn't support namespaces in types):
    namespace Samples.GLPaint {
        // The [Register] attribute overrides the type name registered
        // with the Objective-C runtime, in this case removing the namespace.
        [Register ("AppDelegate")]
        public class AppDelegate { /* ... */ }
    }
Unknown class XXXX in Interface Builder file
This error is generated if you define a class in your interface builder files but you do not provide the actual implementation for it in your C# code.
You need to add some code like this:
    public partial class MyImageView : UIView {
        public MyImageView (IntPtr handle) : base (handle) {}
    }
System.MissingMethodException: No constructor found for Foo.Bar::ctor(System.IntPtr)
This error is produced at runtime when the code tries to instantiate an instance of the classes that you referenced from your Interface Builder file. This means that you forgot to add a constructor that takes a single IntPtr as a parameter.
The constructor with an IntPtr handle is used to bind managed objects with their unmanaged representations.
To fix this, add the following line of code to the class Foo.Bar:
public Bar (IntPtr handle) : base (handle) { }
Type {Foo} does not contain a definition for 'GetNativeField' and no extension method 'GetNativeField' of type {Foo} could be found
If you get this error in the designer generated files (*.xib.designer.cs), it means one of two things:
1) Missing partial class or base class
The designer-generated partial classes must have corresponding partial classes in user code that inherit from some subclass of NSObject, often UIViewController. Ensure that you have such a class for the type that is giving the error.
2) Default namespaces changed
The designer files are generated using your project's default namespace settings. If you have changed these settings, or renamed the project, the generated partial classes may no longer be in the same namespace as their user-code counterparts.
Namespace settings can be found in the Project Options dialog. The default namespace is found in the General->Main Settings section. If it is blank, the name of your project is used as the default. More advanced namespace settings can be found in the Source Code->.NET Naming Policies section.
Warning for actions: The private method 'Foo' is never used. (CS0169)
Actions for interface builder files are connected to the widgets by reflection at runtime, so this warning is expected.
You can use "#pragma warning disable 0169" "#pragma warning enable 0169" around your actions if you want to suppress this warning just for these methods, or add 0169 to the "Ignore warnings" field in compiler options if you want to disable it for your whole project (not recommended).
mtouch failed with the following message: Cannot open assembly '/path/to/yourproject.exe'
If you see this error message, generally the problem is the absolute path to your project contains a space. This will be fixed in a future version of Xamarin.iOS, but you can work around the issue by moving the project to a folder without spaces.
Your sqlite3 version is old - please upgrade to at least v3.5.0!
This happens when you do all of the following:
- Use Mono.Data.Sqlite
- Use Mac OS X Leopard (10.5)
- Run your app within the simulator.
The problem is that Mono is picking up the OS X libsqlite3.dylib, not the iPhoneSimulator's libsqlite3.dylib file. Your app will work on the device, just not in your simulator.
The short term fix is to use Mac OS X Snow Leopard (10.6). A fix for Leopard (10.5) will be released with a future Xamarin.iOS version.
Deploy to device fails with System.Exception: AMDeviceInstallApplication returned 3892346901
This error means that the code-signing configuration for your certificate/bundle id does not match the provisioning profile installed on your device. Confirm you have the appropriate certificate selected in Project Options->iPhone Bundle Signing, and the correct bundle id specified in Project Options->iPhone Application
Code Completion is not working in Xamarin Studio
Ensure that you are using the latest version of Xamarin Studio and Xamarin.iOS
If the issue is still present, please file a bug, attaching the ~/Library/Logs/XamarinStudio-{VERSION}/Ide-{TIMESTAMP}.log, AndroidTools-{TIMESTAMP}.log, and Components-{TIMESTAMP}.log log files.
If all else fails, you can try removing the code completion cache so that it is regenerated:
    rm -r ~/.config/XamarinStudio-{VERSION}/CodeCompletionData
Be careful that you type this command correctly or you could accidentally remove important files.
Xamarin Studio crashes when you copy text
The popular Mac utilities QuickSilver, Google Toolbar and LaunchBar have clipboard features that corrupt Xamarin Studio's memory. In their options, you can list Xamarin Studio as a process they should not interfere with.
Xamarin Studio complains about Mono 2.4 required
If you updated Xamarin Studio due to a recent update, and when you try to start it again it complains about Mono 2.4 not being present, all you have to do is upgrade your Mono 2.4 installation.
Mono 2.4.2.3_6 fixes some important problems that prevented Xamarin Studio from running reliably, sometimes hung Xamarin Studio at startup or prevented the code completion database from being generated.
Once you install the new Mono, Xamarin Studio will start as expected.
Assertion at ../../../../mono/metadata/generic-sharing.c:704, condition `oti' not met
If you are receiving the following stack trace:
    * Assertion at ../../../../mono/metadata/generic-sharing.c:704, condition `oti' not met
    Stacktrace:
      at System.Collections.Generic.List`1<object>..cctor () <0xffffffff>
      at System.Collections.Generic.List`1<object>..cctor () <0x0001c>
      at (wrapper runtime-invoke) object.runtime_invoke_dynamic (intptr,intptr,intptr,intptr) <0xffffffff>
It means that you are linking a static library compiled with thumb code into your project. As of iPhone SDK release 3.1 (or higher at the time of this writing) Apple introduced a bug in their linker when linking non-Thumb code (Xamarin.iOS) with Thumb code (your static library). You will need to link with a non-Thumb version of your static library to mitigate this issue.
System.ExecutionEngineException: Attempting to JIT compile method (wrapper managed-to-managed) Foo[]:System.Collections.Generic.ICollection`1.get_Count ()
The [] suffix indicates that you or the class library are calling a method on an array through a generic collection, such as IEnumerable<>, ICollection<> or IList<>. As a workaround, you can explicitly force the AOT compiler to include such a method by calling the method yourself, and by making sure that this code is executed before the call that triggered the exception. In this case, you could write:
    Foo [] array = null;
    int count = ((ICollection<Foo>) array).Count;
This will force the AOT compiler to include the get_Count method.
Xamarin Studio source editor is extremely slow
Sometimes the Xamarin Studio source editor becomes extremely slow, appearing to hang for several seconds between typing characters.
This issue is very rare and extremely hard to reproduce - it usually cannot be reproduced on the same machine after restarting Xamarin Studio. For this reason we would appreciate it if you could perform several debugging steps before restarting Xamarin Studio, and send the results to us.
- Try closing the editor tab, and re-opening it. Does it take a little bit of editing or moving the caret around until the slowdown happens again?
- Disable "Beam Sync" using the "Quartz Debug" developer tool (which you can find using Spotlight), and check whether the source editor performance is restored to normal.
- Try repeating step (1) with Beam Sync still disabled.
- If the editor hangs for more than a few seconds, try to run "killall -QUIT [XAMARIN STUDIO]" in a terminal while it is hung. It may be difficult to time the kill command to happen while the editor is hung, but it's essential to do so, because the command forces Mono to write stack traces of all threads to the MD log, which we can use to discover what state the threads are in while the XS is hung.
Please attach the XS logs, ~/Library/Logs/XamarinStudio-{VERSION}/Ide-{TIMESTAMP}.log, AndroidTools-{TIMESTAMP}.log, and Components-{TIMESTAMP}.log (in older versions of XS/MonoDevelop, just send ~/Library/Logs/MonoDevelop-(3.0|2.8|2.6)/MonoDevelop.log).
NOTE: The above issue was fixed in XS 2.2 Final
Compiled application is very large
In order to support debugging, debug builds contain additional code. Projects built in release mode are a fraction of the size.
As of Xamarin.iOS 1.3 the debug builds included debugging support for every single component of Mono (every method in every class of the frameworks).
With Xamarin.iOS 1.4 we will introduce a finer grained method for debugging, the default will be to only provide debugging instrumentation for your code and your libraries, and not do this for all of the Mono assemblies (this will still be possible, but you will have to opt-in to debugging those assemblies).
Installation Hangs
Both Mono and Xamarin.iOS installers hang if you have the iPhone Simulator running. This problem is not limited to Mono or Xamarin.iOS, this is a consistent problem across any software that tries to install software on MacOS Snow Leopard if the iPhone Simulator is running at installation time.
Make sure you quit the iPhone simulator and retry the installation.
Ran out of trampolines of type 0
If you get this message while running on a device, you can create more type 0 trampolines (type SPECIFIC) by modifying your project options "iPhone Build" section. You want to add extra arguments for the Device build targets:
-aot "ntrampolines=2048"
The default number of trampolines is 1024. Try increasing this number until you have enough for your application.
Ran out of trampolines of type 1
If you make heavy use of recursive generics, you may get this message on device. You can create more type 1 trampolines (type RGCTX).
Ran out of trampolines of type 2
If you make heavy use of interfaces, you may get this message on device. You can create more type 2 trampolines (type IMT Thunks) by modifying your project options "iPhone Build" section. You want to add extra arguments for the Device build targets:
-aot "nimt-trampolines=512"
The default number of IMT Thunk trampolines is 128. Try increasing this number until you have enough for your usage of interfaces.
Debugger is unable to connect with the device
When you start debugging a device configuration, you will see the debugger show a dialog indicating that it is trying to connect to the application. There are several reasons the debugger may not be able to connect to the application, depending on the mode you're using to connect (USB or WiFi).
If the device and the debugger host are on different networks, a firewall or private network may be preventing the application from connecting to the debugger host in WiFi mode.
Xamarin Studio may not be able to query the correct IP of the host. In WiFi mode Xamarin Studio gives the application all the IPs it can find of the host, and the application tries them all to see if it can use any of them to connect to Xamarin Studio.
Another device is connected to a USB port on the host. In a few cases other devices connected to the USB ports on the host have been known to somehow interfere with debugging in USB mode.
If either WiFi or USB mode does not work, you can easily try the other: in Xamarin Studio, open the Preferences, go to the Preferences/Debugger/iPhone Debugger page, and toggle the "Debug iOS devices over WiFi instead of over USB" checkbox. If neither works, you can see more information about the failure in the device console in verbose mode (which is enabled by adding "-v -v -v" to the additional mtouch arguments in the project's options).
Error 134: mtouch failed with the following message:
This error can be raised if you are trying to build with -nolink on the Xamarin.iOS 1.4 series of releases. You can work around this error by specifying Extra Arguments in your MonoDevelop project configuration.
Add the argument
-nosymbolstrip
and the problem should be resolved.
Distribution identity is not shown in Xamarin Studio project signing options
Xamarin Studio 2.2 has a bug that causes it not to detect distribution certificates that contain a comma. Please update to Xamarin Studio 2.2.1.
Error "AFCFileRefWrite returned: 1" during upload
While uploading an app to your device you may receive an Error "AFCFileRefWrite returned: 1". This can happen if you have a zero-length file.
Error "mtouch failed with no output"
The current release of Xamarin.iOS and Xamarin Studio fails when the project name or the directory where the solution or project is stored contains spaces. To fix this:
- Make sure that neither your project nor the directory where it is stored contains a space.
- In your project "Main Settings", make sure that the Project Name does not contain any spaces.
Error "The binary you uploaded was invalid. A pre-release beta version of the SDK was used to build the application"
This error is usually caused by a project that was started for iPad development before Xamarin.iOS 2.0.0 was released; you likely have some keys in your Info.plist like:
<key>UIDeviceFamily</key> <array> <string>1</string> </array>
This key/value pair should be removed, as Xamarin Studio handles it for you automatically.
Error "A pre-release beta version of the SDK was used to build the app"
(Contributed by Ed Anuff)
Follow these steps:
- Change the SDK version in iPhone Build to 3.2 or iTunes connect will reject it on upload because it is seeing an iPad compatible app built using an SDK version less than 3.2
- Create a custom Info.plist for the project and explicitly set MinimumOSVersion to 3.0 in it. This will override the MinimumOSVersion 3.2 value set by Xamarin.iOS. If you do not do this, the app will not be able to run on an iPhone.
- Rebuild, zip and upload to iTunes Connect.
Unhandled Exception: System.Exception: Failed to find selector someSelector: on {type}
This exception is caused by one of three things:
- You have provided a selector to the Objective-C runtime without applying the corresponding [Export] attribute to a method.
- You have enabled full linking and not applied the [Preserve] attribute to the [Export]ed method.
- You have applied the [Export] attribute to a private method in an inherited type.
MainWindow.xib.designer.cs file is not updated
There was a bug in Xamarin Studio 2.4 that caused it not to group the MainWindow.xib file with the MainWindow.xib.designer file in new projects. This meant it would not update the designer code for that particular file.
This issue is fixed in the version of Xamarin Studio that's available in its built-in updater, so please ensure you use the newer version.
You can fix existing projects by removing (not deleting) the xib and its designer file, then adding it back. This should re-group the files correctly.
UIAlertView or UIActionSheet vanish after being created
If you have some code like this:
    var actionSheet = new UIActionSheet ("My ActionSheet", null, null, "OK", null) {
        Style = UIActionSheetStyle.Default
    };
    actionSheet.Clicked += delegate (object sender, UIButtonEventArgs args) {
        Console.WriteLine ("Clicked on item {0}", args.ButtonIndex);
    };
the "actionSheet" object lives as a temporary variable in the function and as soon as the function terminates, the object is eligible for garbage collection, so it ends up being garbage collected.
To fix this problem, you need to keep a reference to "actionSheet" outside your method, somewhere that will live beyond your method.
Project Always Runs in the iPad Simulator
The iPhone SDK 4.0 installer installs 2 SDKs - the 3.2 SDK, for building iPad-only apps, and the 4.0 SDK, used for building iPhone and Universal apps. It also installs a 3.2 simulator, which simulates only an iPad, and a 4.0 simulator that simulates iPhone or iPhone 4. All older SDKs and simulators are removed.
Xamarin Studio iPhone project build options include a setting for the SDK version that will be used in building your app. This setting can be found in Project Options->Build->iPhone Build.
New projects in Xamarin Studio use the oldest installed SDK as their default SDK setting, and if the SDK specified does not exist, Xamarin Studio will use the closest it can find to build your app. This was done so that projects would not always require the newest SDK. However, this currently results in the 3.2 SDK being used, which in turn results in the iPad simulator being used.
To fix this, use the 4.0 SDK: go to Project Options->Build->iPhone Build and change the SDK value to "4.0" using the dropdown box. You must do this for each configuration and platform combination, accessed using the dropdowns at the top of the panel.
The SDK version should not be confused with the "Minimum OS version" setting. This value does not have to match the SDK version value - it affects the minimum version of the OS your app will install on, which can be older than the SDK, as long as you use only APIs that exist in the older OS, or guard use of newer features using runtime OS version checks. You should set it to the oldest OS version on which you test your app.
Note also that the Project->iPhone Simulator Target menu can be used to pick the simulator that is used by default when running/debugging a project. Additionally, the Run->Run With menu can be used to pick a specific simulator with which to run.
ibtool returns error 133
This means that you have XCode 4 installed. In XCode 4, the tool ibtool was removed, it is no longer possible to edit your XIB files with a standalone tool.
If you want to use Interface Builder, install XCode series 3, available from Apple's web site.
"Can't create display binding for mime type: application/vnd.apple-interface-builder"
This error happens if you try to create an iPhone UI from a non-iPhone project. Make sure that you start with an iPhone/iPad solution, it is not possible to just add iPhone UI elements to a non-iPhone/iPad project.
Startup crash when executing inside the iOS simulator
If you get a runtime crash (SIGSEGV) inside the simulator along with a stack trace that looks like this:
    at (wrapper managed-to-native) System.Reflection.Assembly.GetTypes (System.Reflection.Assembly,bool)
    at MonoTouch.ObjCRuntime.Runtime.RegisterAssembly (System.Reflection.Assembly)
    at (wrapper runtime-invoke) <Module>.runtime_invoke_void_object (object,intptr,intptr,intptr)
...then you probably have one (or more) stale assemblies in your simulator application directory. Such assemblies may exist because the Apple iOS simulator adds and updates files but never deletes them. If this happens, the easiest solution is to select "Reset Content and Settings..." from the simulator menu. Warning: this will remove all files, applications and data from the simulator. The next time you execute your application, Xamarin Studio will deploy it into the simulator and there will be no old, stale assembly to cause the crash.
Simulator hangs during application installation
This can happen when an application name includes a '.' (dot). This is forbidden for the executable name in CFBundleExecutable, even if it can work in many other cases (such as on devices): "The value should not include any extension on the name."
Error: "Custom attribute type 0x43 is not supported" when double clicking .xib files
This is caused by attempting to open .xib files when environment variables are set incorrectly. This should not happen with normal usage of Xamarin Studio/Xamarin.iOS, and re-opening Xamarin Studio from /Applications should fix the problem.
Application runs on simulator but fails on device
This issue can manifest in several forms, and doesn't always produce a consistent error. If the application contains a .xib, check to make sure the Build Action on the .xib is set to InterfaceDefinition. This is the default build action for .xibs.
To check the Build Action, right click on the .xib file and choose Build Action:
System.NotSupportedException: No data is available for encoding 437
When including 3rd-party libraries in your Xamarin.iOS project, this error can be resolved by going to iOS Build > Internationalization and checking the "West" internationalization option.
Source: https://docs.mono-android.net/guides/ios/troubleshooting/troubleshooting/
Created on 2014-01-09 14:21 by timar, last changed 2014-01-13 18:34 by r.david.murray. This issue is now closed.
Try this sample script:
# coding=utf-8
import email
import email.charset
import email.message
c = email.charset.Charset('utf-8')
c.body_encoding = email.charset.QP
m = email.message.Message()
m.set_payload("This is a Greek letter upsilon: υ", c)
print(m.as_string())
Actual result: "This is a Greek letter upsilon: =CF"
Expected result: "This is a Greek letter upsilon: =CF=85"
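The two escapes in the expected output are just the UTF-8 encoding of the upsilon; this can be checked with the stdlib quopri module, independently of the email package:

```python
# U+03C5 (Greek small letter upsilon) is two bytes in UTF-8 (0xCF 0x85),
# so a correct quoted-printable body must contain both escapes, =CF=85.
# The buggy email code emitted only the first one (=CF).
import quopri

payload = "This is a Greek letter upsilon: \u03c5".encode("utf-8")
encoded = quopri.encodestring(payload).decode("ascii")
print(encoded)  # contains =CF=85
```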
hope it will fix that issue
This is a bug in quoprimime.body_encode. If you put a newline on the end of your string, it will work as expected.
The patch in issue 5803 does not have the bug, so I'll probably just apply that to both 3.3 and 3.4.
New changeset 4c5b1932354b by R David Murray in branch '3.3':
#20206, #5803: more efficient algorithm that doesn't truncate output.
New changeset b6c3fc21286f by R David Murray in branch 'default':
Merge #20206, #5803: more efficient algorithm that doesn't truncate output.
Fixed.
Source: https://bugs.python.org/issue20206
Separate plot rendering implementation from plotting data generation
Make it easier to replace/plug plotting libraries. The currently used wx.lib.plot doesn't produce visually appealing plots (at least the latest trunk version has antialiasing), we should make it easier to plug different rendering libraries. I'm adapting the cairoplot library rendering backend, the graphs it produces look quite attractive.
Blueprint information
- Status:
- Complete
- Approver:
- None
- Priority:
- Medium
- Drafter:
- None
- Direction:
- Needs approval
- Assignee:
- None
- Definition:
- Approved
- Series goal:
- None
- Implementation:
Implemented
- Started by
- Michael Rooney on 2009-10-16
- Completed by
- Michael Rooney on 2009-12-29
Related branches
Related bugs
Sprints
Whiteboard
2009.12.29 kolmis:
confirming correct fallback-to-wx behavior on windows.
2009.12.29 mrooney:
Sorry, I meant cairo there and not chaco. I want to make sure that if a Windows user has numpy installed but not GTK, that it falls back to the wx plot instead of dying on cairo dependencies.
2009.12.29 kolmis:
I'm not exactly sure what you mean by the chaco ImportError raising. Running on windows, both 'cairo' and 'wx' suggest having the 'python-numpy' package installed for plotting features.
Shouldn't BasePlotImportE
2009.12.29 mrooney:
I removed the chaco support for now, and after testing the fallbacks (except on Windows, I'd like to test that it properly raises an ImportError there for chaco and falls back. If you get a chance to test on Windows before I do, feel free to let me know), I'm calling this Implemented! Thanks again for your excellent contribution.
2009.12.19 kolmis:
yep, the bundled version is the patched one. the tests and test scripts can be deleted.
2009.12.18 mrooney:
Okay, bundling it makes sense then and sounds easy. It is only ~115K or so I see, and the refactoring allowed us to drop bundling plot.py which was 85K, so it more or less paid for itself. Is the bundled cairoplot your sped up version? I think the only left to do on this blueprint is get the chaco plot working as well as the others (or just drop it, it may not be valuable to support three different plotting backends, cairoplot seems pretty sufficient). Am I missing anything? Thanks again for your work, the new cairoplot is much better!
2009.12.18 kolmis:
no, cairoplot doesn't have a package, moreover its code is very slow, showing a plot in wxbanker took a long time, i had done some tweaks, and proposed a merge
https:/
but without a response, i'd bundle it with wxbanker for start...
2009.12.18 mrooney: I thought it might be fun to play around with this tonight, so I merged your branch into trunk and pushed it up to ~mrooney/
EDIT: Okay, I actually pushed this up to trunk after some modifications. It now will choose cairo by default if it exists, and can fall all the way back to no summary tab. I added the trend line, and make the y labels formatted as currency. Now I see that I guess cairo plot doesn't have a debian package, so it just has to be included? Also, since I factored out the polyfit logic into the baseplot class, wxBanker doesn't need to carry its own plot.py file, so I was able to drop that which is very nice.
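The fallback behavior described above (prefer cairo if it exists, fall back to wx, otherwise no summary tab) can be sketched as a simple import probe. This is illustrative only, not the actual wxBanker code:

```python
import importlib

def choose_backend(preferred=("cairo", "wx")):
    # Return the first backend whose module imports cleanly,
    # or None if no plotting backend is available.
    for name in preferred:
        try:
            importlib.import_module(name)
        except ImportError:
            continue
        return name
    return None

# With neither cairo nor wx installed this returns None,
# and the application would simply omit the summary tab.
```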
2009.10.17 kolmis: strange i didn't stumble upon chaco when comparing python plotting libraries, it looks cool, though it does not seem lightweight. i added a basic chaco support to my cairoplot branch, try wxbanker.py --debug to have a look. the python-chaco lib in karmic is kind of broken, a quick workaround fix is to patch /usr/lib/
#def move_to(*args): return _agg.GraphicsCo
def move_to(*args): return _agg.GraphicsCo
2009.10.16 mrooney: Excellent, that sounds like a great idea! I had also looked at Chaco (http://
Source: https://blueprints.launchpad.net/wxbanker/+spec/replaceable-plot-renderers
In the previous post, we learned how to classify arbitrarily sized images and visualized the response map of the network.
In Figure 1, notice that the head of the camel is almost not highlighted, and the response map contains a lot of the sand texture instead. The bounding box is also significantly off.
Something is not right.
The ResNet18 network we used is very accurate, and in fact it classifies the image correctly. Looking at the bounding box you may be tempted to think that we just got lucky and the classification was correct even when the network did not choose the best information from the image.
But is it really so? The short answer is NO. In the previous post, we used a quick and dirty approach to find the area of interest.
In this post, we will do it the right way and understand the concept of receptive fields in a neural network along the way.
Neural Network Receptive Field
Recall how we found the area of interest and bounding box around the camel in the previous post. We upsampled the response map for the predicted class to fit the original image.
That approach provided some insight, but it was strictly not the right way.
To understand how to do it the right way, we need to understand a concept called the receptive field. For a pixel in a feature map inside the network, the receptive field represents all the pixels from the previous feature maps that affected its value.
The receptive field is the proper tool to understand what the network “saw” and analyzed to predict the “camel” class, whereas the scaled response map we saw in the previous post is only a rough approximation of it.
Let’s pick a toy example.
In Figure 2, we are showing the input image followed by the outputs of two layers of a Convolutional Neural Network (CNN). Let's call the output after the first layer FEATURE_MAP_1, and the output after the second layer FEATURE_MAP_2.
Let’s suppose that the layers 1 and 2 are convolutional with kernel size 3. So, the feature map after a particular layer is affected by a 3×3 region ( i.e. 9 values ) in the previous feature map.
We want to find the receptive field of the dark blue pixel of FEATURE_MAP_2.
The value of this pixel is affected by the 9 corresponding values from FEATURE_MAP_1 marked in blue. In turn, these 9 values are affected by the corresponding pixels from the input image.
In other words, a pixel in FEATURE_MAP_2 is affected by a 5×5 patch (marked in light blue) in the input image. These are the pixels that the dark blue one can "see" on the input image.
Note that in the input image, pixels have different shades of blue. These shades represent the number of times the corresponding pixels participated in the convolutions that affected the dark blue pixel of interest. The outer pixels were used in the computations only once. The center pixel participated in every convolution, and we did 9 of them to compute FEATURE_MAP_1.
This toy example gives us an idea on how to compute the receptive field of a more complex network. By doing this, we can understand which pixels of the input image could affect the results of the network! And thus we can have a much deeper understanding of the results of the network.
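The toy example can also be verified with the standard closed-form recurrence for receptive field size. The post defers the analytic method to a future article, so this sketch is just a preview: each layer grows the receptive field by (kernel - 1) times the cumulative stride.

```python
def receptive_field(layers):
    # layers: (kernel_size, stride) pairs, ordered from input to output.
    rf = 1    # a pixel of the input "sees" itself
    jump = 1  # distance, in input pixels, between adjacent feature-map pixels
    for kernel, stride in layers:
        rf += (kernel - 1) * jump
        jump *= stride
    return rf

# Two 3x3 stride-1 convolutions, as in the toy example above:
print(receptive_field([(3, 1), (3, 1)]))  # -> 5, i.e. a 5x5 input patch
```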
Knowing the receptive field size is very useful for neural network debugging as it gives you an insight into how the net makes its decisions.
What does the Receptive Field Size Depend on?
The receptive field size of the output pixels is typically pretty large – often hundreds of pixels wide. This value depends on the depth of the network, the size of the convolutions in it, and the stride and padding used in the convolution filters. The deeper the network, the more context every pixel can "see" on the input image.
Importantly, the receptive field does not depend on the size of the input image. Even though fully convolutional nets can accept and process images of any size, their receptive field stays the same – as their depth remains constant. Sometimes this means that nets can perform badly if the objects in the input image are too large – they just won’t see enough context to make the decision!
The receptive field size also does not depend on the values of the weights in the network. In fact, it would be the same for trained and untrained networks of the same architecture.
We’ll use this fact to compute the size of the receptive field.
Download Code
Before we go over the explanation, you can download code from our GitHub repo:
Receptive Field Computation for Max Activated Pixel
Let’s discuss how we can visualize the receptive field of a pixel.
There are two main ways:
- Run a backpropagation for this pixel.
- Compute the receptive field size analytically.
In this post, we’ll discuss the first way, and will cover the latter in a future post.
Let’s again infer the network on our image and get the final activation map (we’ll call it “score map” here).
# Load modified resnet18 model with pretrained ImageNet weights
model = FullyConvolutionalResnet18(pretrained=True).eval()

# Perform the inference.
# Instead of a 1x1000 vector, we will get a
# 1x1000xnxm output (i.e. a probability map
# of size n x m for each of the 1000 classes,
# where n and m depend on the size of the image.)
preds = model(image)
preds = torch.softmax(preds, dim=1)

# Find the class with the maximum score in the n x m output map
pred, class_idx = torch.max(preds, dim=1)
row_max, row_idx = torch.max(pred, dim=1)
col_max, col_idx = torch.max(row_max, dim=1)
predicted_class = class_idx[0, row_idx[0, col_idx], col_idx]

# Find the n x m score map for the predicted class
score_map = preds[0, predicted_class, :, :].cpu()
print('Score Map shape : ', score_map.shape)
Our score map has 1 channel because we already extracted only the channel corresponding to the predicted class out of 1000 initial channels. It has 3 rows and 8 columns.
Now let’s first find the pixel in the network result with the highest value for the “camel” class. This is the pixel that got activated the most – let’s see, what parts of the image could it “see”.
score_map_max_row_values, max_row_id = torch.max(score_map, dim=1)
_, max_col_id = torch.max(score_map_max_row_values, dim=1)
max_row_id = max_row_id[0, max_col_id]
print('Coords of the max activation:', max_row_id.item(), max_col_id.item())
In our image, the pixel with the highest activation is located in the 1st row and the 6th column.
Use Backprop to Compute the Receptive Field
To compute the receptive field size using backpropagation, we’ll exploit the fact that the values of the weights of the network are not relevant for computing the receptive field.
Let’s go over the steps
1. Load Model
First we load the model and put it in train mode. This ensures we will be able to pass the gradients.
# Initialize the model
model = FullyConvolutionalResnet18()

# model should be in the train mode to be able to pass the gradient
model = model.train()
2. Set Layer Parameters
As mentioned earlier, the receptive field does not depend on the weights and biases. We will exploit this fact to compute the receptive field.
As we know, convolutional layers have two parameters — weight and bias. We’ll change the weight every layer to be 0.05 and the bias to be 0.
The BatchNorm layer has four parameters — weight, bias, running_mean, and running_var. We set the weight to 0.05, bias to 0, running_mean to 0, and running_var to 1.
Here’s the code.
for module in model.modules():
    # skip errors on container modules, like nn.Sequential
    try:
        # Make all convolution weights equal.
        # Set all biases to zero.
        nn.init.constant_(module.weight, 0.05)
        nn.init.zeros_(module.bias)

        # Set BatchNorm means to zeros,
        # variances - to 1.
        nn.init.zeros_(module.running_mean)
        nn.init.ones_(module.running_var)
    except:
        pass
3. Freeze BatchNorm Layers
In the BatchNorm layer, two out of these four parameters are learnable (weight and bias), and the other two are statistics that are calculated during the forward pass. So they change the value of the input tensor, but they are not updated with the backpropagation. So even though we’ve initialized these parameters in the code above, they will be updated during the forward pass and this will adversely affect the visualization we want.
So, we should switch them to the eval mode. This way, the parameters will not be updated during a forward pass.
# Freeze the BatchNorm stats.
if isinstance(module, torch.nn.modules.BatchNorm2d):
    module.eval()
4. Input a white image
We want to create a situation in which the gradient at the output of the model depends only on the location of the pixels. So, we pass a white image into the network.
input = torch.ones_like(image, requires_grad=True)
out = model(input)
An important thing here is that we want to propagate the gradient to the image to see which pixels affected the final result. So, unlike ordinary training, we've marked the image as differentiable for the PyTorch Autograd by setting requires_grad to True. This way it won't only compute the gradients for the weights of the network, but also for the image itself.
5. Tweak output gradients and backpropagate
Next we will tweak the output gradient that will be backpropagated through the network. We only want to compute the receptive field of the most activated pixel – so we’ll set the corresponding gradient value to 1 and all the others to 0.
When we backpropagate this gradient all the way to the input layer, the receptive field will light up and everything else will be dark.
Now let’s infer this synthetic image through our synthetic network.
# Set the gradient to 0.
# Only set the pixel of interest to 1.
grad = torch.zeros_like(out, requires_grad=True)
grad[0, 0, max_row_id, max_col_id] = 1

# Run the backprop.
out.backward(gradient=grad)

# Retrieve the gradient of the input image.
gradient_of_input = input.grad[0, 0].data.numpy()

# Normalize the gradient.
gradient_of_input = gradient_of_input / np.amax(gradient_of_input)
6. Visualize Results
The final step is to simply normalize the backpropagated gradient at the input layer. The normalization simply involves subtracting the minimum value and then dividing by the maximum value so the normalized image is between 0 and 1.
This normalized image is used as a mask and multiplied with the original image.
def normalize(activations):
    # transform activations so that all the values are in range [0, 1]
    activations = activations - np.min(activations[:])
    activations = activations / np.max(activations[:])
    return activations


def visualize_activations(image, activations):
    activations = normalize(activations)

    # replicate the activations to go from 1 channel to 3,
    # as we have a colorful input image.
    # We could use cvtColor with the GRAY2BGR flag here, but it is not
    # safe - our values are floats, but cvtColor expects 8-bit or
    # 16-bit integers
    activations = np.stack([activations, activations, activations], axis=2)
    masked_image = (image * activations).astype(np.uint8)
    return masked_image


receptive_field_mask = visualize_activations(image, gradient_of_input)

cv2.imshow("receptive_field_max_activation", receptive_field_mask)
cv2.waitKey(0)
The receptive field clearly shows that the network pays the most attention to the head of the camel! That's good news – this means the network is smarter than it appeared before.
Why do we see the grids?
A peculiar detail of this receptive field is its grid structure. This structure is explained by the architecture of the first layers of the ResNet. The first block runs a 7×7 convolution on the input data and then quickly downsamples it to decrease the computations. This means that we only look once at the high-quality image and then look many more times at a progressively downsampled one. In terms of the receptive field, this means that the regions that only participate in the first convolution and then get cut by the downsampling operation only affect the result in a subtle way – and thus are represented as the dark grid lines here.
Receptive Field for the Net Prediction
Let’s go further and analyze not the most activated pixel, but the whole network feature map for the class “camel”. In fact, we can backpropagate it the same way we did for a single pixel – we just need to put the whole tensor to the output gradient. This way we’ll understand which pixels from the input image resulted in the whole final score map for the camel class.
out = model(input)

grad = torch.zeros_like(out, requires_grad=True)
grad[0, predicted_class] = score_map
out.backward(gradient=grad)

gradient_of_input = input.grad[0, 0].data.numpy()
gradient_of_input = gradient_of_input / np.amax(gradient_of_input)
The resulting image shows which areas of the input image affected the prediction of the network:
def find_rect(activations):
    # Dilate and erode the activations to remove grid-like artifacts
    kernel = np.ones((5, 5), np.uint8)
    activations = cv2.dilate(activations, kernel=kernel)
    activations = cv2.erode(activations, kernel=kernel)

    # Binarize the activations
    _, activations = cv2.threshold(activations, 0.25, 1, type=cv2.THRESH_BINARY)
    activations = activations.astype(np.uint8).copy()

    # Find the contour of the binary blob
    contours, _ = cv2.findContours(activations, mode=cv2.RETR_EXTERNAL,
                                   method=cv2.CHAIN_APPROX_SIMPLE)

    # Find bounding box around the object.
    # Rect is a simple 4-field rectangle type (x1, y1, x2, y2) defined
    # elsewhere in the repo.
    rect = cv2.boundingRect(contours[0])
    return Rect(rect[0], rect[1], rect[0] + rect[2], rect[1] + rect[3])


rect = find_rect(gradient_of_input)
receptive_field_mask = visualize_activations(image, gradient_of_input)

cv2.rectangle(receptive_field_mask, (rect.x1, rect.y1), (rect.x2, rect.y2),
              color=(0, 0, 255), thickness=2)
cv2.imshow("receptive_field", receptive_field_mask)
cv2.waitKey(0)
Please note that having nonzero values somewhere in this feature map does not mean that the network predicts camel class for that position – activations for the other classes may be much higher.
Let’s also compare this image to the scaled score map that we used as a rough approximation of the receptive field before:
Now we see good news here. First, the bounding box here is tighter with respect to the camel – so our network is actually an even better object detector than we thought before. Second, the areas that affected the correct prediction seem even more relevant in the receptive field visualization than in the approximated score map one.
Source: https://learnopencv.com/cnn-receptive-field-computation-using-backprop/
VueJS is rising as one of the most popular front end frameworks, compared with React (supported by Facebook) and Angular (from Google). Recently, it has been updated to version 3 with many exciting new features. In this post, we will explore the combination with VueX (state management) to handle a 3rd party API. To make it simple for learning purposes, our goal is just to fetch the top articles from Hacker News and load them from the client side.
You can try the online demo here
First of all, we use Vite to scaffold the project. You may wonder why I don't use the official Vue CLI tool. The reason is that Vite is really fast, and in this case I just want to make a quick demonstration. Vue CLI, on the other hand, is built on top of the powerful and popular Webpack and will bring you an amazing plugin ecosystem (and it's compatible with Vue 2). So, now we use yarn (you can use npm instead; it's just a personal preference, as I like the speed of yarn) to create our new web app (requires Node.js version >=12.0.0.)
yarn create @vitejs/app
After entering the command, you will be prompted to choose some selections. Then we cd into the working directory and run the following commands to install some tools: VueX (version 4.x), eslint as well as its plugin for Vue, and axios.
yarn
yarn add axios vuex@next --save
yarn add -D eslint eslint-plugin-vue
yarn eslint --init
yarn dev
Now, you can open the browser and go to the address to see if the dev server is running.
For the interface, I'm gonna use Tailwind, and "Vue 3 and Vite don't support PostCSS 8 yet so you need to install the Tailwind CSS v2.0 PostCSS 7".
yarn add -D tailwindcss@npm:@tailwindcss/postcss7-compat @tailwindcss/postcss7-compat postcss@^7 autoprefixer@^9
Next, to generate the tailwind.config.js and postcss.config.js files, run:
npx tailwindcss init -p
From the official guide: “In your tailwind.config.js file, configure the purge option with the paths to all of your pages and components so Tailwind can tree-shake unused styles in production builds.”
module.exports = {
  purge: ['./index.html', './src/**/*.{vue,js,ts,jsx,tsx}'],
  darkMode: false, // or 'media' or 'class'
  theme: {
    extend: {},
  },
  variants: {
    extend: {},
  },
  plugins: [],
}
Then create a new file main.css in src/assets/css:
/* ./src/assets/css/main.css */
/*! @import */
@tailwind base;
@tailwind components;
@tailwind utilities;
Then, we need to fetch the data from HackerNews into the VueX store first. In the snippet below, I also set up the axios instance so that we can re-use it later. The HackerNews API for top stories only returns the IDs, so we need to fetch each individual item after receiving the array.
Next, we create a new component at components/Stories.vue as below:
Then add VueX to the main.js
import { createApp } from "vue";
import App from "./App.vue";
import store from "./store";
import "./assets/css/main.css";

const app = createApp(App);
app.use(store);
app.mount("#app");
Finally, we edit App.vue.
Open the browser and voilà.
(Image: Top stories from Hacker News)
Hmm, I forgot the time. We need to make it more readable, instead of a string of numbers. I'm gonna use the timeago.js package for that.
yarn add timeago.js
Then, we add a new method in components/Stories.vue:
methods: {
  parseTime(t) {
    return timeago.format(t * 1000);
  }
},
and implement it in template section:
<div class="text-sm text-gray-500">{{ parseTime(item.time) }}</div>
Reload the page to check the result
The final source code is on Github repo.
In the next article, we will implement advanced features of Vue components to render them dynamically. I would appreciate any feedback from you guys.
Resources:
Vite.JS
Vuex@Next
Official Hacker News API
Tailwind CSS
Discussion (2)
is there any way to use jsx with this?
Yes, I think it's possible since Vue 3 has supported jsx pretty well
Source: https://practicaldev-herokuapp-com.global.ssl.fastly.net/infantiablue/hackernews-reader-with-vue-3-vite-2-vuex-4-tailwind-part-1-1ilg
"Phillip J. Eby" <pje at telecommunity.com> wrote:
> At 12:42 PM 9/22/2006 -0700, Josiah Carlson wrote:
[snip]
> Measure it. Be sure to include the time to import SQLite vs. the time to
> import the zipimport module.
[snip]
> Again, seriously, compare this against a zipfile. You'll find that there's
> absolutely no comparison between reading this and reading a zipfile central
> directory -- which also results in an in-memory cache that can then be used
> to seek() directly to the module.

They are not directly comparable. The registry of packages can do more than zipimport in terms of package naming and hierarchy, but it's not an importer; it's a conceptual replacement of sys.path. I have already stated that the actual imports from this registry won't be any faster, as it will still need to read modules/packages from disk *after* it has decided on a list of paths to check for the package/module.

Further, whether we use SQLite, or any one of a number of other persistence mechanisms, such a choice should depend on a few things (speed being one of them, though maybe not the *only* consideration). Perhaps even a zip file whose 'files' are named with the desired package hierarchy, and whose contents are something like:

    import imp
    globals.update(imp.load_XXX(...).__dict__)
    del imp

> >Actually, I'm offering a way of *registering* a package with the
> >repository from the command line. I'm of the opinion that setting the
> >environment via command line for the subsequent Python runs is a bad
> >idea, but then again, I have been using wxPython's wxversion method for
> >a while to select which wxPython installation I want to use, and find
> >things like:
> >
> > import wxversion
> > wxversion.ensureMinimal('2.6-unicode', optionsRequired=True)
> >
> >To be exactly the amount of control I want, where I want it.

> Well, that's already easy to do for arbitrary packages and arbitrary
> versions with setuptools.
> Eggs installed in "multi-version" mode are added
> to sys.path at runtime if/when they are requested.

Why do we have to use eggs or setuptools to get a feature that *arguably* should have existed a decade ago in core Python? The core functionality I'm talking about is:

    packages.register(name, path, env=None, system=False, persist=False)
        # system==True implies persist==True
    packages.copy_env(fr_env, to_env)
    packages.use_env(env)
    packages.check(name, version=None)
    packages.use(name, version)

With those 5 functions and a few tricks, we can replace all user-level .pth and PYTHONPATH use, and sys.path manipulation done in other 3rd party packages (setuptools, etc.) is easily handled and supported.

> >With a package registry (perhaps as I have been describing, perhaps
> >something different), all of the disparate ways of choosing a version of
> >a library during import can be removed in favor of a single mechanism.
> >This single mechanism could handle things like the wxPython
> >'ensureMinimal', perhaps even 'ensure exact' or 'use latest'.

> This discussion is mostly making me realize that sys.path is exactly the
> right thing to have, and that the only thing that actually needs fixing is
> universal .pth support, and maybe some utility functions for better
> sys.path manipulation within .pth files. I suggest that there is no way an
> arbitrary "registry" implementation is going to be faster than reading
> lines from a text file.

> > > Setuptools works around this by installing an enhancement for the 'site'
> > > module that extends .pth support to include all PYTHONPATH
> > > directories. The enhancement delegates to the original site module after
> > > recording data about sys.path that the site module destroys at startup.

> >But wasn't there a recent discussion describing how keeping persistent
> >environment variables is a PITA both during install and runtime?

> Yes, exactly.
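To make the five-function API proposed in this thread concrete, here is a toy, in-memory sketch. It is purely illustrative: no such `packages` module exists, the system/persist flags are stubbed out, and an `env` parameter is added to check/use for the demonstration.

```python
import sys

# Toy in-memory registry: env -> {package name: (path, version)}.
_registry = {}

def register(name, path, version=None, env=None, system=False, persist=False):
    # system/persist would select a shared, on-disk store; stubbed out here.
    _registry.setdefault(env, {})[name] = (path, version)

def copy_env(fr_env, to_env):
    _registry[to_env] = dict(_registry.get(fr_env, {}))

def use_env(env):
    # Prepend every package path registered for `env` to sys.path.
    for path, _version in _registry.get(env, {}).values():
        if path not in sys.path:
            sys.path.insert(0, path)

def check(name, version=None, env=None):
    entry = _registry.get(env, {}).get(name)
    return entry is not None and (version is None or entry[1] == version)

def use(name, version, env=None):
    if not check(name, version, env):
        raise ImportError("%s %s is not registered" % (name, version))
    sys.path.insert(0, _registry[env][name][0])
```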
You have confused me, because not only have you just said "we use PYTHONPATH as a solution", but you have just acknowledged that using PYTHONPATH is not reasonable as a solution. You have also just said that we need to add features to .pth support so that it is more usable. So, sys.path "is exactly the right thing to have", but we need to add more features to make it better. Ok, here's a sample .pth file if we are willing to make it better (in my opinion): zope,/path/to/zope,3.2.1,netserver zope.subpackage,/path/to/subpackage,.1.1,netserver That's a CSV file with rows defining packages, and columns in order: package name, path to package, version, and a semicolon-separated list of environments that this package is available in (a leading semicolon, or a double semicolon says that it is available when no environment is specified). With a base sys.path, a dictionary of environment -> packages created from .pth files, and a simple function, one can generally develop an applicable sys.path on demand to some choose_environment() call. This is, effectively, a variant of what I was suggesting, only with a different persistance representation. > >Extending .pth files to PYTHONPATH seems to me like a hack meant to work > >around the fact that Python doesn't have a package registry. And really, > >all of the current sys.path + .pth + PYTHONPATH stuff could be subsumed > >into a *single* mechanism. > > Sure -- I suggest that the single mechanism is none other than > *sys.path*. The .pth files, PYTHONPATH, and a new command-line option > merely being ways to set it. I guess we disagree on what is meant by "single" in this context. > All of the discussion that's taken place here has sufficed at this point to > convince me that sys.path isn't broken at all, and doesn't need > fixing. Some tweaks to 'site' and maybe a new command-line option will > suffice to clean everything up quite nicely. 
> > I say this because all of the version and dependency management things that > people are asking about can already be achieved by setuptools, so clearly > the underlying machinery is fine. It wasn't until this message of yours > that I realized that you are trying to solve a bunch of problems that are > quite solvable within the existing machinery. I was mainly interested in > cleaning up the final awkwardness that's effectively caused by lack of .pth > support for the startup script directory. Indeed, everything is solvable within the existing machinery. But it's not a question of solvable, it's a question of can we make things better. When I have had the occasion to use .pth files, I've been somewhat disappointed. Given even the few functions I've defined for an API, or the .pth variant I described, I know I wouldn't be disappointed in trying to set up independant package version installations, application environments, etc. They all come fairly naturally. > > > I'm not sure of that, since I don't yet know how your approach would deal > > > with namespace packages, which are distributed in pieces and assembled > > > later. For example, many PEAK and Zope distributions live in the peak.* > > > and zope.* package namespaces, but are installed separately, and glued > > > together via __path__ changes (see the pkgutil docs). > > > > packages.register('zope', '/path/to/zope') > > > >And if the installation path is different: > > > > packages.register('zope.subpackage', '/different/path/to/subpackage/') > > > >Otherwise the importer will know where the zope (or peak) package exists > >in the filesystem (or otherwise), and search it whenever 'from zope > >import ...' is performed. > > If you're talking about replacing the current import machinery, you would > have to leave this to Py3K, otherwise all you've done is add a *new* import > hook, i.e. a "sys.package_loaders" dictionary or some such. 
It could coexist happily next to sys.path-based machinery, and it is likely easier for it to do so (replacing the sys.path bits in the core language is more work than I would be willing to do).

> If you wanted something like that now, of course, you could slap an
> importer into sys.meta_path that then did a lookup in
> sys.package_loaders. Getting this mechanism bootstrapped, however, is left
> as an exercise for the reader. ;)

I just about cry every time I think about adding an import hook. If others think that this functionality has legs to stand on, I may just have to get help from experienced users.

> Note, by the way, that it might be quite possible to do away with
> everything but sys.meta_path in Py3K, prepopulated with such an importer
> (along with ones to support builtin and frozen modules). You could then
> import a backward-compatibility module that would add support for sys.path
> and for package __path__ attributes, by adding a new entry to
> sys.meta_path. But this is strictly a pipe dream where Python 2.x is
> concerned.

Indeed, actually removing sys.path from 2.x is a non-starter. But replacing user-level modifications of sys.path with calls to a registry? That seems possible, if not desirable, from a "let us not monkey patch the Python runtime" perspective.

- Josiah
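The CSV-style .pth variant floated in this thread (columns: name, path, version, semicolon-separated environments) is easy to prototype. An illustrative parser, not part of any actual proposal:

```python
import csv
import io

def paths_for_env(pth_text, env=None):
    # Columns: name, path, version, semicolon-separated environments.
    # An empty environment field (leading or doubled semicolon) means
    # "available when no environment is specified".
    paths = []
    for row in csv.reader(io.StringIO(pth_text)):
        if len(row) != 4:
            continue
        name, path, version, envs = row
        allowed = envs.split(";")
        if (env is None and "" in allowed) or env in allowed:
            paths.append(path)
    return paths

sample = (
    "zope,/path/to/zope,3.2.1,netserver\n"
    "zope.subpackage,/path/to/subpackage,.1.1,netserver\n"
)
print(paths_for_env(sample, env="netserver"))
# -> ['/path/to/zope', '/path/to/subpackage']
```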
Source: https://mail.python.org/pipermail/python-dev/2006-September/068964.html
The numsecondaries property specifies the number of nodes within a device group that can master the group if the primary node fails. The default number of secondaries for device services is one. You can set the value to any integer between one and the number of operational nonprimary nodes in the device group.
If you change the numsecondaries property, secondary nodes are added or removed from the device group if the change causes a mismatch between the actual number of secondaries and the desired number.
This procedure uses the clsetup utility to set the numsecondaries property for all types of device groups. Refer to cldevicegroup(1CL) for information about device group options when configuring any device group. From the clsetup Main Menu, select the option labeled Device groups and volumes.
The Device Groups Menu is displayed.
To change key properties of a device group, select the option labeled Change key properties of a device group.
The Change Key Properties Menu is displayed.
To change the desired number of secondaries, type the number that corresponds to the option for changing the numsecondaries property.
Follow the instructions and type the desired number of secondaries to be configured for the device group. The corresponding cldevicegroup command is then executed, a log is printed, and the utility returns to the previous menu.
Validate the device group configuration.
If you change any configuration information for a VxVM disk group or volume that is registered with the cluster, you must reregister the device group by using clsetup. Such configuration changes include adding or removing volumes, as well as changing the group, owner, or permissions of existing volumes. Reregistration after configuration changes ensures that the global namespace is in the correct state. See How to Update the Global-Devices Namespace.
Verify that the device group attribute has been changed.
Look for the device group information that is displayed by the following command.
The following example shows the cldevicegroup command that is generated by clsetup when it configures the desired number of secondaries for a device group (dg-schost-1). This example assumes that the disk group and volume were created previously.
The following example shows the cldevicegroup command that is generated by clsetup when it sets the desired number of secondaries for a device group (dg-schost-1) to two. See How to Set the Desired Number of Secondaries for a Device Group for information about changing the desired number of secondaries after a device group is created.
The following example shows use of a null string value to configure the default number of secondaries. The device group will be configured to use the default value, even if the default value changes.
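The three examples above can be sketched as the cldevicegroup invocations that clsetup generates; the device group name dg-schost-1 comes from the text, and the commands must be run as superuser on a cluster node:

```shell
# Set the desired number of secondaries for dg-schost-1 to two.
cldevicegroup set -p numsecondaries=2 dg-schost-1

# Use a null string to fall back to the default number of secondaries,
# even if the default value changes later.
cldevicegroup set -p numsecondaries="" dg-schost-1

# Verify that the device group attribute has been changed.
cldevicegroup show dg-schost-1
```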
http://docs.oracle.com/cd/E19787-01/820-7358/cjaiedjh/index.html
Switch: Overview
A web component that is used to toggle a property or feature on or off. Toggling the component on or off should have immediate action and should not require pressing any additional buttons (submit) to confirm what just happened. Switch is not a Checkbox in disguise and should not be used as part of a form.
Features
- Get or set the checked state (boolean) - `checked` boolean attribute
- Pre-select an option by setting the `checked` boolean attribute
- Get or set the value of the choice - `choiceValue()`
Installation
npm i --save @lion/switch
import { LionSwitch } from '@lion/switch'; // or import '@lion/switch/define';
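A minimal usage sketch, assuming a browser environment: the define import registers the custom element, which (following Lion's naming convention) should be available as the `lion-switch` tag. The label text is invented for illustration:

```html
<script type="module">
  import '@lion/switch/define';
</script>

<!-- Toggling has immediate effect; no submit button, no form. -->
<lion-switch label="Enable notifications" checked></lion-switch>
```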
https://lion-web.netlify.app/components/interaction/switch/overview/
Root Password Readable in Clear Text with Ubuntu 5.10
BBitmaster writes "An extremely critical bug and security threat was discovered in Ubuntu Breezy Badger 5.10 earlier today by a visitor on the Ubuntu Forums that allows anyone to read the root password simply by opening an installer log file. Apparently the installer fails to clean its log files and leaves them readable to all users. The bug has been fixed, and only affects The 5.10 Breezy Badger release. Ubuntu users, be sure to get the patch right away."
Re:I believe this is a feature (Score:2, Informative)
Re:I believe this is a feature (Score:2, Informative)
Re:But Ubuntu has no root account! (Score:5, Informative)
Colin Watson's response was very professional (Score:4, Informative)
Re:I believe this is a feature (Score:2, Informative)
Re:okay (Score:5, Informative)
Re:Just in case (Score:3, Informative)
The password in the log file was the primary account's password. This account is a member of the sudoers group, so the same password can get you root access.
Re:Just in case (Score:3, Informative)
Preview of 5.10 Not Affected (Score:2, Informative)
For Ubuntu 5.10 users: (Score:2, Informative)
Solution (Score:5, Informative)
Re:Saw this on Digg (Score:5, Informative).
So not only did they have a similar problem, it persisted for over a year after initially being found & allegedly fixed.
Re:What does patch help? (Score:3, Informative)
What does this patch fix? The installer?
No, the patch removes that key from the file, and chmod's it 600.
Re:Saw this on Digg (Score:5, Informative)
Actually they reflect reality and are the result of customer requests.
In managed environments, patches are almost never applied ad-hoc as they are released. They are collected together, then tested and rolled out on a schedule, usually monthly.
Therefore output the "routine" log to one file and the "debug" log to a different file.
Doesn't this just go back to the same problem though? No. First, debug logs don't need to be written to quickly, because debug sessions are going to be slow anyway. Therefore you can encrypt them or otherwise make them unreadable to the casual observer. In general, you want these to be sent to the maintainer as part of a bug report in the event of an install failure, so just pre-encrypt them with the maintainer's public PGP/GPG key.
A more "correct" solution would be to assign different debug levels to different levels of logging, where your maximum level logs absolutely ALL data entered by the user, but where distributed versions are issued with much more basic logging that excludes private information that isn't likely to be useful in debugging the problem anyway.
(The ideal solution is to have maintenance debugging for logging everything as a distinct patch to the basic distribution, so the basic distribution cannot - even accidentally - log everything. That way, users don't even have to put up with obscenely inflated binaries that have lots of debug stuff that will likely never be used, and maintainers don't ever have brown-paper-bag security scares.)
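The scheme described above (a world-readable routine log next to a locked-down, maintainer-encrypted debug log) can be sketched in shell; the file names and maintainer address are hypothetical:

```shell
# Routine log: world-readable, contains nothing sensitive.
echo "routine install messages" > routine.log

# Debug log: created with mode 600 so only the owner can read it.
install -m 600 /dev/null debug.log
echo "sensitive debug detail" >> debug.log

# Pre-encrypt for the maintainer before attaching it to a bug report
# (commented out here because it needs the maintainer's public key):
# gpg --encrypt --recipient maintainer@example.org debug.log
```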
Root Passwords should never be stored ANYWHERE... (Score:2, Informative)
Re:UNIX mouse driver released (Score:5, Informative)
Since long before MS-DOS had them:
Look. [wikipedia.org].
Re:So what if this was fixed quickly. (Score:4, Informative)
Re:Solution (Score:3, Informative)
Not in my logs at all (Score:2, Informative)
Ubuntu 5.10 "Breezy Badger" \n \l
I upgraded from Warty - with dist-upgrade - maybe that's my deal... apt-get update && apt-get upgrade, anyway.
Re:Solution (Score:3, Informative)
cat
If you had been using Fedora since FC1, and you happened to be using it on a 586 architecture, you would have found out. Because for some reason they decided that on that architecture they would compile glibc with some options making it pretty picky about the location of the stack. This caused programs to crash at random, and the bug was never fixed. They simply wouldn't accept that there could be a bug in glibc.
I can install Fedora and be fairly certain that even if somehow my system stopped updating
Actually that is not so unlikely to happen. Because on FC4 rhn-applet will always tell you that there are no updates available. And occasionally yum will also say that even when there are updates available. And the Fedora people do not consider this to be a bug.
And while we are at it, do you know what happens to the umask on a Fedora system? If I decide to set my umask to 077 such that other users cannot read by default, then
I'm not saying Fedora is a bad distribution, after all I do use it on all my systems. You just shouldn't claim it to be so much more secure than other distributions. Yes, this bug in Ubuntu is very bad, but unfortunately they are not the first to introduce a bug that bad.
Re:Saw this on Digg (Score:2, Informative)
Security against an attack if you have physical, unsupervised access to the box is nil, in any case. Carry a pendrive or a bootable CD containing a rescue Linux distro with you and boot from it. There, you can mess around with system config files and do things like creating your very own SSH account on the machine. Due to the way PCs work, the only way to protect your machine against attacks by someone with physical access to it is to raise a BIOS password or encrypt your files, not a bad idea in any case.
Re:[easier] Solution (Score:2, Informative)
Re:So what if this was fixed quickly. (Score:2, Informative)
Let's check your facts...
"the sky is blue" -- Well, the sky is actually black and it only appears blue because light is scattered in the atmosphere. So far you're 0 for 1.
"water is wet" -- This one is true... if you only consider its liquid form. However, its solid and gaseous forms are most definitely not wet. That makes you 0 for 2.
With a record like that, can we really believe your third so-called "fact"?
Re:So what if this was fixed quickly. (Score:2, Informative)
Let me guess: American, right? Only an American can be this bad at science.
A black sky is the way it is. Ever see that thing they call "space"? You'll see the sky is black. The aforementioned scattering of light in our atmosphere makes it look blue during the day, but the sky itself is black. Consult any primary school science class for further details.
Water is the name of a chemical compound, also known as Dihydrogen monoxide. The phase doesn't change what it is, it is still water, the same way liquid nitrogen is still nitrogen. If that doesn't satisfy you, there is solid water that is not ice. It is amorphous solid water. And gaseous water is also called water vapor. Notice how both of those specifically mention that they are water.
Thanks for trying. Get a primary school education before trying again.
Brilliant use of an irrelevant last line, by the way.
Re:Saw this on Digg (Score:2, Informative)
Firstly owning up and making changes:
.)" - Colin Watson
Second quote:
"We've never updated the ISO images for any released Ubuntu distributions. We don't intend to, either, unless some terrifying and unforeseen showstopper arises." -CJW
Terrifying showstopper?? You mean like this one?! This could affect their reputation for years. I'd destroy all CDs affected. It's one thing to screw up. It something different to knowingly mail that CD to another unsuspecting user.
Re:Solution (Score:1, Informative)
Re:So what if this was fixed quickly. (Score:3, Informative)
For the record:
I'm happy to take responsibility for the lack of testing that meant we didn't spot this earlier, but it's not quite the trivial stupid mistake that people are making it out to be.
Re:Choose strong obscure passwords (Score:1, Informative)
Re:Open Password! (Score:2, Informative)
Re:[easier] Solution (Score:1, Informative)
Or before typing sensitive info, then again when finished. That way the history file isn't flushed, just the relevant entries.
Re:Open Password! (Score:3, Informative)
Dunno - presumably it's long been in any password cracker out there? Along with "none" or "password" or any other "clever" password there is?
Re:Patch mirror (Score:4, Informative)
Re:Choose strong obscure passwords (Score:2, Informative)
Oh yeah!
typedef struct {
    unsigned int len;
    char *content;
} String;
Re:Choose strong obscure passwords (Score:3, Informative)
How about
#include <string> ? Radical, I know, but you have to put strings that contain their length and can contain nul somewhere!
Re:Choose strong obscure passwords (Score:2, Informative)
Re:[easier] Solution (Score:2, Informative)
If you wish to change root's pass, you need to 'sudo passwd root' or 'sudo su -;passwd'
Re:Real Solution: CHANGE YOUR PASSWORD (Score:2, Informative)
Additionally, this should only happen if you're performing an expert install; the normal installation procedure doesn't seem to have this problem.
The installer maintainer (Colin Watson) has said two things that may (or may not) be of interest:
I don't see how this is happening, because we deliberately db_set those questions to empty after retrieving the password to avoid this problem.
So I guess that didn't work on some install types. The other, which addresses your question about Breezy install CDs:
Re:[easier] Solution (Score:2, Informative)
Where does this idea that you need to type "sudo passwd root" come from? I see it repeated in IRC channels and message boards, but it's just not true.
conf/questions.dat
Re:Saw this on Digg (Score:2, Informative)
None of these are possible with a blank password on the target account.
https://slashdot.org/story/06/03/13/0525254/root-password-readable-in-clear-text-with-ubuntu/informative-comments
Kotlin Mega Tutorial
A Productive and Pragmatic Language
A programming language is usually designed with a specific purpose in mind. This purpose can be anything from serving a specific environment (e.g., the web) to a certain paradigm (e.g., functional programming). In the case of Kotlin, the goal is to build a productive and pragmatic language that has all the features a developer needs and makes them easy to use.
Kotlin was initially designed to work with other JVM languages, but it has now evolved to be much more: it also works in the browser and as a native application.
This is what Dmitry Jemerov, development lead at JetBrains said about their choice of creating Kotlin:
We’ve looked at all of the existing JVM languages, and none of them meet our needs. Scala has the right features, but its most obvious deficiency is very slow compilation
For a more in-depth comparison between Scala and Kotlin you can see our article: Kotlin vs Scala: Which Problems Do They Solve?
Why not keep using Java? Java is a good language, but it has a series of issues because of its age and success: it needs to maintain backward compatibility with a lot of old code and it suffers from old design principles.
Kotlin is multi-paradigm, with support for object-oriented, procedural and functional programming paradigms, without forcing to use any of them. For example, contrary to Java, you can define functions as top-level, without having to declare them inside a class.
In short, Kotlin was designed to be a better Java, that takes all the best practices and add a few innovations to make it the most productive JVM language out there.
A Multi-platform Language from the Java World
It is no secret that Kotlin comes from the Java world. JetBrains, the company behind the language, has long experience in developing tools for Java, and they thought specifically about the issues of developing with Java. In fact, one of the core objectives of Kotlin is creating code that is compatible with an existing Java code base. One reason is that the main demographic of Kotlin users is Java developers in search of a better language. However, interoperability also supports working with both languages: you can maintain old code in Java and create a new project in Kotlin without issues.
It has been quite successful in that, and this is the reason why Google has chosen to officially support Kotlin as a first-class language for Android development.
The main consequence is that, if you come from the Java world, you can immediately start using Kotlin in your existing Java projects. Also, you will find familiar tools: the great IntelliJ IDEA IDE and build tools like Gradle or Maven. If you have never used Java, that is still good news: you can take advantage of a production-ready infrastructure.
However, Kotlin is not just a better Java that also has special support for Android. You can also use Kotlin in the browser, working with existing JavaScript code and even compile Kotlin to a native executable, that can take advantage of C/C++ libraries. Thanks to native executables you can use Kotlin on an embedded platform.
So, it does not matter which platform you use or which language you already know, if you search for a productive and pragmatic language Kotlin is for you.
A Few Kotlin Features
Here is a quick summary of Kotlin features:
- 100% interoperable with Java
- 100% compatible with Java 6, so you can create apps for most Android devices
- Runs on the JVM, can be transpiled to JavaScript and can even run native, with interoperability with C and Objective-C (macOS and iOS) libraries
- There is no need to end statements with a semicolon (;). Blocks of code are delimited by curly brackets ({ })
- First-class support for constant values and immutable collections (great for parallel and functional programming)
- Functions can be top-level elements (i.e., there is no need to put everything inside a class)
- Functions are first-class citizens: they can be passed around just like any other type and used as arguments of functions. Lambdas (i.e., anonymous functions) are well supported by the standard library
- There is no static keyword; instead there are better alternatives
- Data classes, special classes designed to hold data
- Everything is an expression: if, for, etc. can all return values
- The when expression is like a switch with superpowers
Table of Contents
The companion repository for this article is available on GitHub
Basics of Kotlin
- Variables and Values
- Types
- Nullability
- Kotlin Strings are Awesome
- Declaring and Using Functions
- Classes
- Data Classes
- Control Flow
- The Great when Expression
- Dealing Safely with Type Comparisons
- Collections
- Exceptions
- A Simple Kotlin Program
Advanced Kotlin
Setup
As we mentioned, you can use Kotlin on multiple platforms and in different ways. If you are still unsure if you want to develop with Kotlin you can start with the online development environment: Try Kotlin. It comes with a few exercises to get the feel of the language.
In this setup section, we are going to see how to setup the most common environment to build a generic Kotlin application. We are going to see how to setup IntelliJ IDEA for Kotlin development on the JVM. Since Kotlin is included in all recent versions of IntelliJ IDEA you just have to download and install the IDE.
You can get the free Community edition for all platforms: Windows, MacOS and Linux.
Basic Project
You can create a new Kotlin project very easily, just launch the wizard and choose the template.
Then you have to fill in the details of your project; the only required value is the project name. You can leave the other settings to their default values.
Now that you have a project, you can look at its structure and create a Kotlin file inside the src folder.
This is the basic setup, which is good for when you need to create a pure Kotlin project with just your own code. It is the ideal kind of project for your initial Kotlin programs, since it is easier and quicker to setup.
Gradle Project
In this section we are going to see how to create a Gradle project. This is the kind of project you are going to use the most in real projects, since it easily allows you to mix Java and Kotlin code, both your own code and libraries from other people. That is because Gradle makes it easy to download and use existing libraries, instead of having to download them manually.
You can create a new Gradle project quite easily, just launch the wizard, choose the Gradle template and select Kotlin (Java) in the section Additional Libraries and Frameworks.
Then you have to fill in the naming details of your project, needed for every Gradle project. You have to indicate a name for your organization (the GroupId) and for the specific project (the ArtifactId). In this example we choose the names strumenta (i.e., the company behind SuperKotlin) for our organization and books for our project.
Then you have to specify some Gradle settings, but you can usually just click Next for this stage.
Finally, you have to fill in the details of your project; the only required value is the project name, and you can leave the other settings to their default values. The name should already be filled in, with the ArtifactId value chosen in one of the preceding steps.
Now that you have a project, you can look at its structure and create a Kotlin file inside the src folder. Given that you can mix Java and Kotlin code, there are two folders: one for each language. The structure of a Java project is peculiar: in a Java project, the hierarchy of directories matches the package structure (i.e., the logical organization of the code). For example, if a Java file is part of a package (say, com.example), it will be inside the corresponding folders (com/example).
With Kotlin you do not have to respect this organization, although it is recommended if you plan to use both Java and Kotlin. If you just use Kotlin, you can use whatever structure you prefer.
Adding Kotlin Code
Inside the new Kotlin file you can create the main routine/function. IntelliJ IDEA comes with a template, so you simply need to write main and press Tab to have it appear.
When you code is ready, you can compile it and run the program with the proper menu or clicking the icon next to main function.
And that is all you need to know to setup and start developing with Kotlin.
Basics of Kotlin
In this section we explain the basic elements of Kotlin. You will learn about the elements needed to create a Kotlin program: defining variables, understanding the type system, how Kotlin supports nullability and how it deals with strings. You will learn how to use the building blocks, like control flow expressions, functions and classes, and how to define the special Kotlin classes known as data classes. Finally, we will put everything together to create a simple program.
Variables and Values
Kotlin is a multi-paradigm language, which includes good support for functional programming. It is not a purely functional language like Haskell, but it has the most useful features of functional programming. An important part of this support is that constant values are first-class citizens of Kotlin, just like normal variables.
Constants are called values and are declared using the keyword val.
val three = 3
This is as simple as declaring a variable; the difference, of course, is that to declare a variable you use the keyword var.
var number = 3
If you try to reassign a value you get a compiler error:
Val cannot be reassigned
This first-class support for values is important for one reason: functional programming. In functional programming, the use of constant values allows some optimizations that increase performance. For instance, calculations can be parallelized since there is a guarantee that the value will not change between two parallel runs, given that it cannot change.
As you can see, in Kotlin there is no need to end statements with a semicolon; a newline is enough. However, adding them is not an error: it is not required, but it is allowed. Though remember that Kotlin is not Python: blocks of code are delimited by curly braces and not by indentation. This is an example of the pragmatic approach of the language.
Note: outside this chapter, when we use the term variable we usually also talk about value, unless explicitly noted.
Types
Kotlin is a statically typed language, which means that the type of any variable must be determined at compile time. Up until now, we have declared variables and values without indicating any type, because we have provided an initialization value. This initialization allows the Kotlin compiler to automatically infer the type of the variable or value.
Obviously, you can explicitly assign a type to a variable. This is required, if you do not provide an initialization value.
val three: Int = 3
In this example, the value three has the type Int.
Whether you explicitly indicate the type of a value or not, you always have to initialize it, because a value cannot be changed.
The types available in Kotlin are the usual ones: Char, String, Boolean, several types of numbers.
There are 4 types of integer numbers: Byte, Short, Int, Long.
A Long literal must end with the suffix L.
There are also two types for real numbers: Float and Double.
Float and Double literals use different formats; notice the suffix f for Float.
Since Kotlin 1.1 you can use underscores between digits, to improve the readability of large numeric literals.
For example, this is valid code.
var large = 1_000_000
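The numeric types and literal suffixes above can be summarized in a short sketch (the value names are invented for illustration):

```kotlin
val aByte: Byte = 1        // Byte and Short need an explicit type
val aShort: Short = 2
val anInt = 3              // integer literals default to Int
val aLong = 4L             // the L suffix marks a Long
val aFloat = 5.0f          // the f suffix marks a Float
val aDouble = 6.0          // real literals default to Double
val large = 1_000_000L     // underscores work in any numeric literal
```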
A Type is Fixed
Even when you are not explicitly indicating the type, the initial inferred type is fixed and cannot be changed. So, the following is an error.
var number = 3
number = "string" // this is an error
This remains true even if there is a type that could satisfy both the initial assignment and the subsequent one.
var number = 1
number = 2.0 // this is an error
In this example number has the type Int, but the second assignment has the type Double. So, the second assignment is invalid, even though an Int value could in principle be converted to a Double.
In fact, to avoid similar errors, Kotlin is quite strict even when inferring the type to literals. For example, an integer literal will always just be an integer literal and it will not be automatically converted to a double.
var number = 1.0
number = 2 // this is an error
In the previous example the compiler will give you the following error, relative to the second assignment:
The integer literal does not conform to the expected type Double
However, complex expressions, like the following one, are perfectly valid.
var number = 1.0
number = 2 + 3.0
This works because the whole addition expression is of type Double. Even though the first operand of the addition expression is an Int, it is automatically converted to Double.
Kotlin usually prefers a pragmatic approach, so this strictness can seem out of character. However, Kotlin has another design principle: to reduce the number of errors in the code. This is the same principle that dictates the Kotlin approach to nullability.
Nullability
In the last few years best practices suggest being cautious in using null and nullable variables. That is because using null references is handy but prone to errors. It is handy because sometimes there is not a meaningful value to use to initialize a variable. It is also useful to use a null value to indicate the absence of a proper value. However, the issue is that sometimes the developer forgets to check that a value is valid, so you get bugs.
Tony Hoare, the inventor of the null reference, calls it his billion-dollar mistake.
That is why in Kotlin, by default, you must pay attention when using null values. Whether it is a string, an array or a number, you cannot assign a null value to a variable.
var text: String = "Test"
text = "Changing idea"
text = null // this is an error
The last assignment will make the compiler throw an error:
Null can not (sic) be a value of a non-null type String
As this error indicates, you cannot use null with standard types, but there is a way to use null values. All you have to do is indicate to the compiler that you want to use a nullable type. You can do that by adding a ? at the end of the type.
var text: String = null // it does not compile
var unsafeText: String? = null // ok
Nullability Checks
Kotlin takes advantage of the nullability or, conversely, the safety of types at all levels. For instance, it is taken into consideration during checks.
val size = unsafeText.length // it does not compile because it could be null

if (unsafeText != null) {
    val size = unsafeText.length // it works, but it is not the best way
}
When using a nullable type you are required to check that the variable currently has a valid value before accessing it. After you have checked that a nullable type is currently not null, you can use the variable as usual. You can use it as if it were not nullable, because inside the block it is safe to use. Of course, this check looks cumbersome, but there is a better way that is equivalent and more concise.
val size = unsafeText?.length // it works
The safe call operator (?.) guarantees that the variable will be accessed only if it is not null. If the variable is null then the safe call operator returns null. So, in this example the type of the variable size would be Int?.
Another operator related to null values is the elvis operator (?:). If whatever is on the left of the elvis operator is not null then the elvis operator returns it, otherwise it returns what is on the right.
val len = text?.length ?: -1
This example combines the safe-call operator and the elvis operator:
- if text is not null (safe call), then on the left there will be the length of the string text, thus the elvis operator will return text.length
- if text is null (safe call), then on the left there will be null, so the elvis operator will return -1.
Finally, there is the non-null assertion operator (!!). This operator converts any value to the corresponding non-null type. For example, a variable of type String? becomes a value of type String. If the value to be converted is null then the operator throws an exception.
// it prints the length of text or throws an exception depending on whether text is null or not
println(text!!.length)
This operator has to be used with caution, only when you are absolutely certain that the expression is not null.
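Putting the safe-call and elvis operators together in a runnable sketch (the describe function is invented for illustration):

```kotlin
fun describe(text: String?): String {
    // safe call + elvis: falls back to -1 when text is null
    val len = text?.length ?: -1
    return "length: $len"
}

fun main() {
    println(describe("Kotlin"))   // length: 6
    println(describe(null))       // length: -1
}
```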
Kotlin Strings are Awesome
Kotlin strings are powerful: they come with plenty of features and a few variants.
Strings are immutable, so whenever you modify a string you are actually creating a new one. The elements of a string can be accessed with the indexing operator ([]).
var text: String = "Kotlin is awesome"
// it prints K
println(text[0])
You can escape some special characters using a backslash. The escape sequences supported are: \t, \b, \n, \r, \', \", \\ and \$. You can use the Unicode escape sequence syntax to input any character by referencing its code point. For example, \u0041 is equivalent to A.
You can concatenate strings using the + operator, as in many other languages.
var text: String = "Kotlin"
// it prints "Kotlin is awesome"
println(text + " is awesome")
However, there is a better way to concatenate them: string templates. These are expressions that can be used directly inside a string and are evaluated, instead of being printed as they are. These expressions are prefixed with a dollar sign ($). If you want to use arbitrary expressions, you have to put them inside curly braces, together with the dollar sign (e.g., ${4 + 3}).
fun main(args: Array<String>) {
    val who: String = "john"

    // simple string template expression
    // it prints "john is awesome"
    println("$who is awesome")

    // arbitrary string template expression
    // it prints "7 is 7"
    println("${4 + 3} is 7")
}
This feature is commonly known as string interpolation. It is very useful, and you are going to use it all the time.
However, it is not a panacea. Sometimes you have to store long, multi-line text, and for that normal strings are not good, even with templates. In such cases you can use raw strings, delimited by triple double quotes (""").
val multiline = """Hello,
    I finally wrote this email.
    Sorry for the delay, but I didn't know what to write.
    I still don't.
    So, bye $who."""
Raw strings support string templates, but not escape sequences. There is also an issue due to formatting: given that the IDE automatically indents the text, if you try to print this string you are going to see the initial whitespace for each line.
Luckily Kotlin includes a function that deals with that issue: trimMargin(). This function will remove all leading whitespace up until the character you used as argument. It will also remove the character itself. If you do not indicate any character, the default one used is |.
val multiline = """Hello,
    |I finally wrote the email.
    |Sorry for the delay, but I didn't know what to write.
    |I still don't.
    |So, bye $who.""".trimMargin()
This will create a string without the leading whitespace.
Declaring and Using Functions
To declare a function, you need to use the keyword fun, followed by an identifier, parameters between parentheses and the return type. Then you obviously add the code of the function between curly braces. The return type is optional: if it is not specified, it is assumed that the function does not return anything meaningful.
For example, this is how to declare the main function.
fun main(args: Array<String>) {
    // code here
}
The main function must be present in each Kotlin program. It accepts an array of strings as parameter and returns nothing. If a function returns nothing, the return type can be omitted. In such cases the type inferred is Unit. This is a special type that indicates that a function does not return any meaningful value; it is basically what other languages call void.
So, these two declarations are equivalent.
fun tellMe(): Unit {
    println("You are the best")
}

// equivalent to the first one
fun tell_me() {
    println("You are the best")
}
As we have seen, functions are first-class citizens: in Kotlin, classes and interfaces are not the only first-level entities that you can use.
If a function signature indicates that the function returns something, it must actually return a value of the proper type using the return keyword.
fun tellMe(): String {
    return "You are the best"
}
The only exception is when a function returns Unit. In that case you can use return or return Unit, but you can also omit them entirely.
fun tellNothing(): Unit {
    println("Don't tell me anything! I already know.")
    // either of the two is valid, but both are usually omitted
    return
    return Unit
}
Function parameters have names and types; the types cannot be omitted. The format of a parameter is: name, followed by a colon and a type.
fun tell(who: String, what: String): String {
    return "$who is $what"
}
Function Arguments
In Kotlin, function arguments can use names and default values. This simplifies reading and understanding at the call site, and it limits the number of function overloads: you do not have to create a new overload for every optional argument, you just give the argument a default value in the definition.
fun drawText(x: Int = 0, y: Int = 0, size: Int = 20, spacing: Int = 0, text: String) {
    [..]
}
Calling the same function with different arguments.
// using default values; text must be passed by name because it is the last parameter
drawText(text = "kneel in front of the Kotlin master!")
// using named arguments
drawText(10, 25, size = 20, spacing = 5, text = "hello")
Single-Expression Functions
If a function returns a single expression, the body of the function can be written without curly braces. Instead, you indicate the body of the function using a format similar to an assignment.
fun number_raised_to_the_power_of_two(number: Int) = number * number
You can explicitly indicate the type returned by a single-expression function. However, it can also be omitted, even when the function returns something meaningful (i.e., not Unit), since the compiler can easily infer the type of the expression returned.
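Since if is an expression (as we will see in the control flow section), it combines nicely with single-expression functions; a small sketch:

```kotlin
// the return type (Int) is inferred from the expression
fun max(a: Int, b: Int) = if (a > b) a else b

// an explicit return type is also allowed
fun min(a: Int, b: Int): Int = if (a < b) a else b

println(max(3, 7))
```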
Classes
Classes are essentially custom types: a group of variables and methods united in a coherent structure.
Classes are declared using the keyword class followed by a name and a body.
class Info { [..] }
How do you use a class? There is no new keyword in Kotlin; to instantiate an object you just call a constructor of the class.
val info = Info()
Properties
You cannot declare fields directly inside a class, instead you declare properties.
class Info {
    var description = "A great idea"
}
These properties look and behave suspiciously like simple fields: you can assign values to them and access their values just like any other simple variable.
val info = Info()
// it prints "A great idea"
println(info.description)
info.description = "A mediocre idea"
// it prints "A mediocre idea"
println(info.description)
However, behind the scenes the compiler converts them to properties with a hidden backing field. That is, each property has a backing field that is accessible through a getter and a setter. When you assign a value to a property the compiler calls the setter, and when you read its value the compiler calls the getter. In fact, you can alter this default behavior and create a custom getter and/or setter.
Inside these custom accessors you can access the backing field using the identifier field. You can access the value passed to the setter using the identifier value. You cannot use these identifiers outside the custom accessors.
class Info {
    var description = "A great idea"
    var name: String = ""
        get() = "\"$field\""
        set(value) {
            field = value.capitalize()
        }
}
Let’s see the property name in action.
info.name = "john"
// it prints "John" (quotes included)
println(info.name)
Kotlin offers the best of both worlds: you automatically get properties, which can be used as easily as simple fields, but if you need some special behavior you can also create custom accessors.
Constructors
If the class has a primary constructor, it can be declared in the class header, following the class name. It can also be prefixed with the keyword constructor. A primary constructor is one that is always called, either directly or eventually through other constructors. In the following example, the two declarations are equivalent.
// these two declarations are equivalent
class Info (var name: String, var number: Int) { }
class Info constructor (var name: String, var number: Int) { }
You cannot include any code inside the primary constructor. Instead, if you need to do any initialization, you can use initializer blocks. There can be many initializer blocks; they are executed in the order in which they are written. This means that they are not all executed before the initialization of the object, but right where they appear.
class Info (var name: String, var number: Int) {
    init {
        println("my name is $name")
    }

    var description = "A great idea for $name"

    init {
        name = "Nemo"
        println("my name is $name")
    }
}

// into the main function
fun main(args: Array<String>) {
    val info = Info("John", 5)
}
When the program executes, it prints the two strings in the order in which they appear. If you try to use the description property in an initializer block that appears before the property is defined, you will get an error.
A class can also have secondary constructors, which are defined with the keyword constructor. Secondary constructors must eventually call the primary constructor: they can do that directly or through another secondary constructor.
class Info (var name: String, var number: Int) {
    constructor(name: String) : this(name, 0) {
        this.name = name.capitalize()
    }
}
There are a couple of interesting things going on in this example: we see how to call a primary constructor and an important difference between primary and secondary constructors. To call the primary constructor you use the this keyword and supply the arguments after the constructor signature. The important difference between primary and secondary constructors is that the parameters of a primary constructor can define properties, while the parameters of a secondary constructor are always just parameters.
If the parameters of a primary constructor are also properties, they are accessible throughout the whole lifecycle of the object, just like normal properties. If they are simple parameters, they are obviously accessible only inside the constructor, just like any other function parameter.
You can automatically define a property with a parameter of the primary constructor simply by putting the keyword var or val in front of the parameter.
In this example, the primary constructor of the first class defines properties, while the second does not.
// class with a primary constructor that defines properties
class Info (var name: String, var number: Int)

// class with a primary constructor that does not define properties
class Info (name: String, number: Int)
Inheritance
A class can inherit from a base class to get its properties and functions. This way you can avoid repetition and build a hierarchy of classes that goes from the most generic to the most specific. In Kotlin, a class can only inherit from one base class.
If a class does not include an explicit base class, it implicitly inherits from the superclass Any.
// it implicitly inherits from Any
class Basic
The class Any has only a few basic methods, like equals(), hashCode() and toString().
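For instance, every object already has these methods, even a class that declares nothing; a minimal sketch:

```kotlin
class Basic

val a = Basic()
val b = Basic()

// toString() and hashCode() come from Any
println(a.toString())
println(a.hashCode())

// equals() from Any compares references by default
println(a == a)   // same reference
println(a == b)   // different objects
```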
A derived class must call a constructor of the base class. This call can happen in different ways, and it is reflected in the syntax of the class. The syntax to make a class derive from another requires adding, after the name of the derived class, a colon and a reference to the base class. This reference can be either the name of the base class or a constructor of the base class. The difference depends on whether the derived class has a primary constructor. If the derived class has no primary constructor, it must call a constructor of the base class in its secondary constructors; otherwise, it calls it directly in its primary constructor.
Let’s see a few examples to clarify this statement.
// the derived class has no primary constructor
class Derived : Base {
    // calling the base constructor with super()
    constructor(p: Int) : super() {
    }
}

// the derived class has a primary constructor
class Derived(p: Int) : Base(p)
You cannot use super (used to call the base constructor) inside the body of the constructor. In other words, super is not a normal expression or function.
Notice that if the derived class has a primary constructor you must call the constructor of the base class there. You cannot call it later in a secondary constructor. So, there are two alternatives, but there is no choice: you have to use one or the other depending on the context.
Create a Base Class and Overriding Elements
Kotlin requires an explicit syntax when indicating classes that can be derived from.
This means that this code is wrong.
class NotABase(p: Int)
class Derived(p: Int) : NotABase(p)
It will show the following error:
This type is final, so it cannot be inherited from
You can only derive from a class if it is explicitly marked as open.
open class Base(p: Int)
class Derived(p: Int) : Base(p)
You also need an explicit syntax when a class has elements that can be overridden, for example if you want to override a method or a property of a base class. The difference is that you have to use both the modifier open on the element of the base class and the modifier override on the element of the derived class. The lack of either of these two modifiers will result in an error.
open class Base(p: Int) {
    open val text = "base"
    open fun shout() {}
}

class Derived(p: Int) : Base(p) {
    override val text = "derived"
    override fun shout() {}
}
This approach makes for a clearer and safer design. The official Kotlin documentation says that the designers chose it because of the book Effective Java, 3rd Edition, Item 19: Design and document for inheritance or else prohibit it.
Data Classes
Frequently the best way to group semantically connected data is to create a class to hold it. For such a class, you need a few utility functions to access the data and manipulate it (e.g., to copy an object). Kotlin includes a specific type of class just for this scope: a data class.
Kotlin gives you all that you typically need automatically, simply by putting the keyword data in front of the class definition.
data class User(val name: String, var password: String, val age: Int)
That is all you need. Now you get for free:
- getters and setters (setters only for var properties) to read and write all properties
- component1() .. componentN() for all properties, in the order of their declaration; these are used for destructuring declarations (we are going to see them later)
- equals(), hashCode() and copy() to manage objects (i.e., compare and copy them)
- toString() to output an object in the human-readable form Name_of_the_class(Name_of_the_variable=Value_of_the_variable, [..])
For example, given the previous data class
val john = User("john", "secret!Shhh!", 20)
// it prints "john"
println(john.component1())
// mostly used automagically in destructuring declarations like this one
val (name, password, age) = john
// it prints 20
println(age)
// it prints "User(name=john, password=secret!Shhh!, age=20)"
println(john)
It is a very useful feature that saves time, especially when compared to Java, which does not offer a way to automatically create properties or compare objects.
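For example, copy() and value-based equality (available through ==) work out of the box; a quick sketch reusing the User data class:

```kotlin
data class User(val name: String, var password: String, val age: Int)

val john = User("john", "secret!Shhh!", 20)

// copy() clones the object, optionally changing some properties
val olderJohn = john.copy(age = 21)
println(olderJohn)

// equals()/hashCode() compare by value, not by reference
println(john == john.copy())   // true: same property values
println(john == olderJohn)     // false: different age
```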
There are only a few requirements to use a data class:
- the primary constructor must have at least one parameter
- all primary constructor parameters must be properties (i.e., they must be preceded by var or val)
Control Flow
Kotlin has four control flow constructs: if, when, for and while. If and when are expressions, so they return a value; for and while are statements, so they do not return a value. If and when can also be used as statements, that is to say standalone and without returning a value.
If
An if expression can have a branch of one statement or a block.
var top = 0
if (a < b) top = b

// With else and blocks
if (a > b) {
    top = a
} else {
    top = b
}
When a branch has a block, the value returned is the last expression in the block.
// returns a or b
val top = if (a > b) a else b

// With blocks
// returns a or 5
var top = if (a > 5) {
    println("a is greater than 5")
    a
} else {
    println("a is not greater than 5")
    5
}
Given that if is an expression there is no need for a ternary operator (condition ? then : else), since an if with an else branch can fulfill this role.
For and Ranges
A for loop iterates through the elements of a collection. So, it does not behave like a for statement in a language like C/C++, but more like a foreach statement in C#.
The basic format of a for statement is like this.
for (element in collection) {
    print(element)
}
To be more precise, the Kotlin documentation says that for iterates through everything that provides an iterator, which means anything that:

- has a member or extension function iterator(), whose return type
- has a member or extension function next(), and
- has a member or extension function hasNext() that returns Boolean
The Kotlin for can also behave like a traditional for loop, thanks to range expressions. These are expressions that define a list of elements between two extremes, using the .. operator.
// it prints "1 2 3 4 "
for (e in 1..4) print("$e ")
Range expressions are particularly useful in for loops, but they can also be used in if or when conditions.
if (a in 1..5) println("a is inside the range")
Ranges cannot be defined in descending order using the .. operator. The following is valid code (i.e., the compiler does not show an error), but it does not do anything.
// it prints nothing
for (e in 4..1) print("$e ")
Instead, if you need to define an iteration over a range in descending order, you use downTo in place of the .. operator.
// it prints "4 3 2 1 "
for (e in 4 downTo 1) print("$e ")
There are also other functions that you can use in ranges and for loops: step and until. The first one dictates the amount you add or subtract at each iteration; the second one indicates an exclusive range, i.e., the last number is not included in the range.
// it prints "6 4 2 "
for (e in 6 downTo 1 step 2) print("$e ")
// it prints "1 6 "
for (e in 1..10 step 5) print("$e ")
// it prints "1 2 "
for (e in 1 until 3) print("$e ")
While
The while and do .. while statements work as you would expect.
while (a > 0) {
    a--
}

do {
    a--
    print("i'm printing")
} while (a > 0)
The when statement is a great feature of Kotlin that deserves its own section.
The Great when Expression
In Kotlin, when replaces and enhances the traditional switch statement. A traditional switch is basically just a statement that can substitute a series of simple if/else that make basic checks. So, you can only use a switch to perform an action when one specific variable has a certain precise value. This is quite limited and useful only in a few circumstances.
Instead, when offers much more than that. It:

- can be used as an expression or a statement (i.e., it can return a value or not)
- has a better and safer design
- can have arbitrary condition expressions
- can be used without an argument
Let’s see an example of all of these features.
A Safe and Powerful Design
First of all, when has a better design. It is more concise and powerful than a traditional switch.
when(number) {
    0 -> println("Invalid number")
    1, 2 -> println("Number too low")
    3 -> println("Number correct")
    4 -> println("Number too high, but acceptable")
    else -> println("Number too high")
}
Compared to a traditional switch, when is more concise:
- no complex case/break groups, only the condition followed by ->
- it can group two or more equivalent choices, separating them with a comma
Instead of having a default branch, when has an else branch. The else branch is required if when is used as an expression. So, if when returns a value, there must be an else branch.
var result = when(number) {
    0 -> "Invalid number"
    1, 2 -> "Number too low"
    3 -> "Number correct"
    4 -> "Number too high, but acceptable"
    else -> "Number too high"
}

// with number = 1, it prints "when returned "Number too low""
println("when returned \"$result\"")
This is due to the safe approach of Kotlin. This way there are fewer bugs, because it can guarantee that when always assigns a proper value.
In fact, the only exception to this rule is if the compiler can guarantee that when always returns a value. So, if the normal branches cover all possible values then there is no need for an else branch.
val check = true

val result = when(check) {
    true -> println("it's true")
    false -> println("it's false")
}
Given that check has a Boolean type, it can only have two possible values, so the two branches cover all cases and this when expression is guaranteed to assign a valid value to result.
Arbitrary Condition Branches
The when construct can also have arbitrary conditions, not just simple constants.
For instance, it can have a range as a condition.
var result = when(number) {
    0 -> "Invalid number"
    1, 2 -> "Number too low"
    3 -> "Number correct"
    in 4..10 -> "Number too high, but acceptable"
    !in 100..Int.MAX_VALUE -> "Number too high, but solvable"
    else -> "Number too high"
}
This example also shows something important about the behavior of when. If you look at the fifth branch, the one with the negated range check, you will notice something odd: it actually covers all the previous branches, too. That is to say, if a number is 0, it is also not between 100 and the maximum value of Int, and obviously the same is true for 1 or 6, so the branches overlap.
This is an interesting feature, but it can lead to confusion and bugs if you are not aware of it. The compiler solves the ambiguity by looking at the order in which you write the branches: when can have overlapping branches, and in case of multiple matches the first matching branch is chosen. This means that the order in which you write the branches matters: it is not irrelevant, and it can have consequences.
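A small sketch of the first-match rule: both range branches match 5, but only the first one is chosen:

```kotlin
val number = 5

val result = when (number) {
    in 1..10 -> "between 1 and 10"
    in 1..100 -> "between 1 and 100" // also true for 5, but never reached
    else -> "out of range"
}

println(result)
```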
The range expressions are not the only complex conditions that you can use. The when construct can also use functions, is expressions, etc. as conditions.
fun isValidType(x: Any) = when(x) {
    is String -> print("It's a string")
    specialType(x) -> print("It's an acceptable type")
    else -> false
}
The Type of a when Condition
In short, when is an expressive and powerful construct that can be used whenever you need to deal with multiple possibilities. What you cannot do is use conditions that return incompatible types. In a condition you can use a function that accepts any argument, but it must return a type compatible with the type of the argument of the when construct.
For instance, if the argument is of type Int you can use a function that accepts any number of arguments, but it must return an Int. It cannot return a String or a Boolean.
var result = when(number) {
    0 -> "Invalid number"
    // OK: check returns an Int
    check(number) -> "Valid number"
    // OK: checkString returns an Int, even though it accepts a String argument
    checkString(text) -> "Valid number"
    // ERROR: not valid
    false -> "Invalid condition"
    else -> "Number too high"
}
In this case, the false condition is an example of an invalid condition that you cannot use with an argument of type Int.
Using when Without an Argument
The last interesting feature of when is that you can use it without an argument. In that case it acts as a nicer if-else chain: the conditions are Boolean expressions. As always, the first branch that matches is chosen; given that these are Boolean expressions, this means the first condition that evaluates to true.
when {
    number > 5 -> print("number is higher than five")
    text == "hello" -> print("number is low, but you can say hello")
}
The advantage is that a when expression is cleaner and easier to understand than a chain of if-else statements.
If you want to know more about when you can read a whole article about it: Kotlin when: A switch with Superpowers.
Dealing Safely with Type Comparisons
To check if an object is of a specific type you can use the is expression (also known as the is operator). To check that an object is not of a certain type you can use the negated version, !is.
if (obj is Double) println(obj + 3.0)

if (obj !is String) {
    println(obj)
}
If you need to cast an object there is the as operator. This operator has two forms: the safe and the unsafe cast. The unsafe version is the plain as. It throws an exception if the cast is not possible.
val number = 27
// this throws an exception because number is an Int
var large = number as Double
The safe version, as?, instead returns null in case the cast fails.
val number = 27 var large: Double? = number as? Double
In this example the cast returns null, but it does not throw an exception. Notice that the variable that holds the result of a safe cast must be able to hold a null result. So, the following will not compile.
val number = 27
// it does not compile because large cannot accept a null value
var large: Double = number as? Double
The compiler will show the following error:
Type mismatch: inferred type is Double? but Double was expected
On the other hand, it is perfectly fine to try casting to a type that cannot hold null. So, the as? Double part of the previous example is valid code: you do not need to write as? Double?. As long as the variable that holds the result can accept a null, you can try to cast to any compatible type. The last part is important, because you cannot compile code that tries to cast to a type that the variable cannot accept.
So, the following example is an error and does not compile.
var large: Double? = number as? String
The following does compile, but large is inferred to be of type String?.
var large = number as? String
Smart Casts
Kotlin is a language that takes into account both safety and productivity; we have already seen an example of this attitude when looking at the when expression. Another good example of this approach are smart casts: the compiler automatically inserts safe casts where they are needed, when you use an is expression. This saves you from inserting them yourself or continually using the safe call operator (?.). A smart cast works with if, when and while expressions. Here is an example with when.
when (x) {
    is Int -> print(x + 1)
    is String -> print(x.length + 1)
    is IntArray -> print(x.sum())
}
If it were not for smart casts, you would have to do the casting yourself or use the safe call operator. This is how you would have to write the previous example.
when (x) {
    is Int -> {
        if (x != null) print(x + 1)
    }
    is String -> print(x?.length + 1)
    is IntArray -> print(x?.sum())
}
Smart casts also work on the right side of an and (&&) or an or (||) operator.
// x is automatically cast to String for x.length > 0
if (x is String && x.length > 0) {
    print(x.length)
}
The important thing to remember is that you cannot use smart casts with mutable properties. That is because the compiler cannot guarantee that they were not modified somewhere else in the code. You can use them with local variables.
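A sketch of this limitation (the Holder class is just an illustration): the compiler rejects a smart cast on a mutable property, but copying the property into a local variable enables it.

```kotlin
class Holder {
    var item: Any = "hello"
}

fun lengthOf(holder: Holder): Int {
    // `if (holder.item is String) holder.item.length` would NOT compile:
    // item is a mutable property, so the compiler cannot guarantee
    // it is still a String when it is read again

    // copying into a local variable enables the smart cast
    val item = holder.item
    return if (item is String) item.length else 0
}

println(lengthOf(Holder()))
```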
Collections
Kotlin supports three standard collections: List, Set and Map:

- List is a generic collection of elements with a precise order
- Set is a generic collection of unique elements without a defined order
- Map is a collection of key-value pairs
A rather unique feature of Kotlin is that collections come both in a mutable and an immutable form. This precise control over when and which collections can be modified is helpful in reducing bugs.
Collections have a standard series of functions to sort and manipulate them, such as sorted or filter. They are quite obvious, and we are not going to see them one by one. However, we are going to see the most powerful and Kotlin-specific functions later, in the advanced section.
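For a taste, here is a minimal sketch of a few of these standard functions (we will meet more of them in the advanced section):

```kotlin
val numbers = listOf(4, 1, 3, 2)

// sorted() returns a new sorted list, leaving the original untouched
println(numbers.sorted())

// filter() keeps only the elements matching the predicate
println(numbers.filter { it > 2 })

// map() transforms every element
println(numbers.map { it * 10 })
```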
Lists
val numbers: MutableList<Int> = mutableListOf(1, 2, 3)
val fixedNumbers: List<Int> = listOf(1, 2)
numbers.add(5)
numbers.add(3)
println(numbers)
// it prints [1, 2, 3, 5, 3]
A list can contain elements of the same type in the order in which they are inserted, and it can contain identical elements. Notice that there is no specific syntax to create a list, a set or a map; you need to use the appropriate function of the standard library. To create a MutableList you use mutableListOf, to create an immutable List you use listOf.
Sets
val uniqueNumbers: MutableSet<Int> = mutableSetOf(1, 3, 2)
uniqueNumbers.add(4)
uniqueNumbers.add(3)
println(uniqueNumbers)
// it prints [1, 3, 2, 4]
A set can contain elements of the same type in the order in which they are inserted, but they must be unique. So, if you try to add an element which is already in the set, the addition is ignored. To create a MutableSet you use mutableSetOf, to create an immutable Set you use setOf.
There are also other options to create a set, such as sortedSetOf or hashSetOf. These functions may have different features (e.g., the elements are always sorted) or be backed by different structures (e.g., a hash table).
Maps
val map: MutableMap<Int, String> = mutableMapOf(1 to "three", 2 to "three", 3 to "five")
println(map[2])
// it prints "three"

for ((key, value) in map.entries) {
    println("The key is $key with value $value")
}
// it prints:
// The key is 1 with value three
// The key is 2 with value three
// The key is 3 with value five
A map is like an associative array: each element is a pair made up of a key and an associated value. The key is unique in the whole collection. To create a MutableMap you use mutableMapOf, to create an immutable Map you use mapOf.
You can easily iterate through a map thanks to the property entries which returns a collection of keys and values.
Immutable and Read-Only Collections
Kotlin does not distinguish between an immutable collection and a read-only view of a mutable collection. This means, for instance, that if you create a List from a MutableList, you cannot change the list through the List reference, but the underlying list can change anyway. This can happen if you modify it through the MutableList reference. This holds true for all kinds of collections.
val books: MutableList<String> = mutableListOf("The Lord of the Rings", "Ubik")
val readOnlyBooks: List<String> = books
books.add("1984")
// it does not compile
readOnlyBooks.add("1984")
// however...
println(readOnlyBooks)
// it prints [The Lord of the Rings, Ubik, 1984]
In this example you cannot add directly to the list readOnlyBooks; however, if you change books, readOnlyBooks also changes.
Exceptions
An exception is an error that requires special handling. If the situation cannot be resolved the program ends abruptly. If you can handle the situation you have to catch the exception and solve the issue.
The Kotlin syntax for throwing and catching exceptions is the same as Java or most other languages.
Throw Expression
To throw an exception you use the throw expression.
// to throw an exception
throw Exception("Error!")
You can throw a generic exception, or create a custom exception that derives from an existing Exception class.
class CustomException(error: String) : Exception(error)

throw CustomException("Error!")
In Kotlin, throw is an expression, so it returns a value of type Nothing. This is a special type with no values. It indicates code that will never be reached. You can also use this type directly, for example as the return type of a function to indicate that it never returns.
fun errorMessage(message: String): Nothing {
    println(message)
    throw Exception(message)
}
This type can also come up with type inference. The nullable variant of this type (i.e., Nothing?) has exactly one valid value: null. So, if you use null to initialize a variable whose type is inferred, the compiler will infer the type Nothing?.
var something = null // something has type Nothing?
Try Expression
To catch an exception you need to wrap a try expression around the code that could throw the exception.
try {
    // code
} catch (e: Exception) {
    // handling the exception
} finally {
    // code that is always executed
}
The finally block contains code that is always executed, no matter what. It is useful for cleanup, like closing open files or freeing resources. You can use any number of catch blocks and one finally block, but there must be at least one catch or finally block.
try {
    // code
} catch (e: CustomException) {
    // handling the exception
}
If you want to catch only an exception of a certain type, you can set it as the argument of catch. In this code we only catch exceptions of type CustomException.
try {
    // code
} catch (e: Exception) {
    // handling all exceptions
}
If you want to catch all exceptions, you can use a catch with an argument of type Exception, which is the base class of all exceptions. Alternatively, you can use the finally block alone.
try {
    // code
} finally {
    // code that is always executed
}
Try is an expression, so it can return a result. The returned value is the last expression of the try or catch block. The finally block cannot return anything.
val result: String = try { getResult() } catch (e: Exception) { "" }
In this example we wrap the call to a function getResult in a try expression. If the function returns normally, we initialize the value result with the value it returned; otherwise we initialize it with an empty string.
A Simple Kotlin Program
Up until now we have seen the basics of Kotlin: how to define functions and classes, the control flow constructs available, etc. In this chapter we are going to put all of this together to create a simple Kotlin program. This program will convert a bunch of CSV files into a bunch of JSON files.
Setting Up
We do not need any external library in this project, so we are going to create a simple Kotlin project. We only need to create a Kotlin file inside the source folder and we are ready to work; you can choose any name you want for the file. Inside this file we need to add an import for a Java class, since Kotlin reuses the standard Java library to access files.
import java.io.File
Getting a List of CSV Files
Now that everything is ready, we can start working on the main function.
fun main(args: Array<String>) {
    // get a list of files in the input directory
    val files = File("./input").listFiles()

    // walk through the list of files
    for (file in files) {
        // analyze only the CSV files
        if (file.path.endsWith(".csv")) {
            // get the content of the file divided by lines
            val input: List<String> = File(file.path).readLines()
            // separate the header row from the rest of the content
            val lines = input.takeLast(input.count() - 1)
            val head: List<String> = input.first().split(",")

            [..]
        }
    }
}
This is the first part of the function, where we collect the files in the input directory, filter only the ones that are CSV files and read the content of each file. Once we have the lines of each CSV file, we separate the first line, which contains the header, from the rest of the content. Then we get the names of the columns from the header line.
The code is quite easy to understand. The interesting part is that we can easily mix Java classes with normal Kotlin code. In fact, parts like the class File and the function listFiles are defined in Java, while functions like endsWith and readLines are Kotlin code. You can check that with IntelliJ IDEA, by trying to look at the implementation code, just like we do in the following video.
You can access Kotlin code directly, by right-clicking on a piece of Kotlin code and going to the implementation voice in the menu. Instead since Java is distributed as compiled code, you can only see after it has been decompiled.
Given that the Java code is available only through decompilation, the first time you try to access it you will see a warning like this one.
Convert CSV data to JSON
Once we have the content of the CSV file, we need to transform it into the corresponding JSON data. The following code corresponds to the [..] part in the previous listing.
var text = StringBuilder("[")

for (line in lines) {
    // get the individual CSV elements; it's not perfect, but it works
    val values = line.split(",")

    text.appendln("{")

    // walk through the elements of the CSV line
    for (i in 0 until values.count()) {
        // convert the element in the proper JSON string
        val element = getElement(values[i].trim())

        // write the element to the buffer
        // pay attention to how we write head[i]
        text.append("\t\"${head[i]}\": $element")

        // append a comma, except for the last element
        if (i != values.count() - 1)
            text.appendln(",")
        else
            text.appendln()
    }

    text.append("},")
}

// remove the last comma
text.deleteCharAt(text.length - 1)
// close the JSON array
text.appendln("]")

val newFile = file.path.replace(".csv", ".json")
File(newFile).writeText(text.toString())
For each file we create a text variable, a StringBuilder, to contain the output. The loop that transforms the format from CSV to JSON is simple:
- we loop through each line
- for each line, we create a list of elements by splitting the line at each comma we find
- we use each element of the list as the value of a JSON field, picking as the name the element of the CSV header in the corresponding position
The rest of the code deals with adding the right delimiters for JSON and writing the new JSON file.
All that remains to see is the function getElement, which we use to convert a CSV element into the proper JSON version.
fun isNumeric(text: String): Boolean =
    try {
        text.toDouble()
        true
    } catch (e: NumberFormatException) {
        false
    }

fun getElement(text: String): String {
    when {
        // items to return as they are
        text == "true" || text == "false" || text == "null" || isNumeric(text) -> return text
        // strings must be returned between double quotes
        else -> return "\"$text\""
    }
}
We need to convert a CSV element into the corresponding JSON element: simple strings have to be put between double quotes, while numbers and special values (i.e., boolean constants and null) can be written as they are. To check whether an element is a number we create the function isNumeric.
To convert a string into a number there is no way other than trying to do it and catching the resulting exception if the conversion fails. Since in Kotlin try is an expression, we can use the expression syntax for the function isNumeric. If the conversion succeeds, we know that the text is a number, so we return true; otherwise we return false.
And that is pretty much our simple program.
We hope that you can see how clear and easy to use Kotlin is: it smooths the hard edges of Java and gives you a concise language that is fun to use.
Advanced Kotlin
Now we can move to the advanced parts of Kotlin, where we learn how to take advantage of its most powerful features. How to use functions at their fullest with higher-order functions and lambdas. We explain what are and how to use generic types and the powerful features around them available in Kotlin. Finally, we take a look at a few interesting niceties of Kotlin and how to create a real world Kotlin program.
Higher-order Functions
In Kotlin functions are first-class citizens: they can be stored in variables and passed around just like any other value. This makes it possible to use higher-order functions: functions that accept functions as arguments and/or return them as results. In particular, a lambda is a function literal: an anonymous function that is not declared but is used directly as an expression.
Basically, a lambda is a block of code that can be passed around just like any other literal (e.g., just like a string literal). The combination of these features allows Kotlin to support basic functional programming.
Function Types
The core of the functional support is function types: anonymous types that correspond to the signature of a function (i.e., its parameters and return type). They can be used just like any other type.
Their syntax is a list of parameters between parentheses, followed by an arrow and the return type.
var funVar: (String) -> Unit
In this example the variable funVar can hold any function that has the corresponding signature, that is to say any function that accepts a String as argument and returns Unit (i.e., no value).
fun tell(text: String) {
    println(text)
}

fun main(args: Array<String>) {
    var funVar: (String) -> Unit
    funVar = ::tell
}
For example, you could directly assign a function using a callable reference to that particular element. The syntax is Class::function. In the previous example tell is a top-level function, so the class is absent. If you wanted to reference a method of a class, such as toInt of String, you would use String::toInt.
When you define a function type you always have to indicate the return type explicitly. When declaring normal functions that return Unit you can omit the return type, but not with function types. Also, you have to put the parentheses for the parameters, even when the function type does not accept any parameter.
// wisdom has no arguments and gives back nothing meaningful
val wisdom: () -> Unit = { println("Life is short, but a string can be long") }
Of course, if the compiler can infer the type correctly, you can omit the type altogether. This is true for all types, even function types. So, we could have written the previous example even in this way, because the compiler can understand that the lambda has no parameter and returns nothing.
val wisdom = { println("Life is short, but a string can be long") }
Lambdas
You could also directly assign a lambda to funVar. The syntax of a lambda reflects the syntax of a function type; the difference is that you have to name the arguments of the lambda and you do not need to specify the return type.
var funVar: (String) -> Unit = { text: String -> println(text) }
In this example we put the equivalent of the function tell as the body of the lambda. Whatever way you use to assign a function to the variable funVar, once you do that, you can use it just like any other normal function.
// it prints "Message"
funVar("Message")
This code prints the message just as if you had called the function tell directly.
You could also call a lambda directly, supplying the argument.
// it prints 15
println({ x: Int -> x * 3 }(5))
Here we directly call the lambda with the argument 5, so that the result of our lambda (15) is passed as argument to the function println.
Conventions
Lambdas are so useful that Kotlin has a couple of interesting conventions to simplify their use.
The first one is the implicit parameter it.
If both these conditions are true:
- the compiler already knows the signature of the lambda, or can figure it out
- the lambda has only one argument
Then you can omit declaring the parameter of the lambda and use the implicit parameter it.
var simpleFun: (Int) -> Int = { it + 2 }
// this is equivalent to the following declaration
// var simpleFun: (Int) -> Int = { i: Int -> i + 2 }

println(simpleFun(2))
Notice that you can also declare the parameter yourself explicitly. This is a better choice if you have nested lambdas.
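A minimal sketch (the names are ours) of why explicit parameters help with nested lambdas:

```kotlin
fun main() {
    val matrix = listOf(listOf(1, 2), listOf(3, 4))

    // with nested lambdas an inner `it` would shadow the outer one;
    // naming the parameters keeps both levels readable
    val doubled = matrix.map { row -> row.map { cell -> cell * 2 } }

    println(doubled) // prints [[2, 4], [6, 8]]
}
```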
The second convention applies only if a lambda is passed as argument of the last parameter of a function. In such cases you can write the lambda outside the parentheses. If the lambda is the only argument of the function, you can omit the parentheses altogether.
Let’s start with a function.
fun double(number: Int = 1, calculation: (Int) -> Int): Int {
    return calculation(number) * 2
}
The function double has as parameters an Int and a function. The parameter number has a default value of 1. The function parameter accepts one argument of type Int and returns a value of type Int. The function double returns whatever is returned by the lambda calculation, multiplied by 2.
val res_1 = double(5) { it * 10 }
val res_2 = double { it * 2 }

println(res_1) // it prints 100
println(res_2) // it prints 4
In the first case, we supply to double both an argument for number (i.e., 5) and a lambda for calculation. In the second one, we just supply a lambda, because we take advantage of the default argument of number. In both cases we write the lambda outside the parentheses; in the second case we can omit the parentheses altogether.
Lambda and Collections
Lambdas are very useful with collections and in fact they are the backbone of the advanced manipulation of collections.
map and filter
The basic function to manipulate collections is filter: this function accepts as argument a lambda and returns a new collection. The lambda is given as argument an element of the collection and returns a Boolean. If the lambda returns true for an element, that element is added to the new collection; otherwise it is excluded.
val li = listOf(1, 2, 3, 4)
// it prints [2, 4]
println(li.filter { i: Int -> i % 2 == 0 })
In this example, the filter function returns all even numbers.
The function map creates a new collection by applying the lambda supplied to map to each element of the original collection.
val li = listOf(1, 2, 3, 4)
// we use the it implicit parameter
println(li.map { it * 2 }) // it prints [2, 4, 6, 8]
In this example, the map function doubles each element of the collection.
data class Number(val name: String, val value: Int)

val li = listOf(Number("one", 1), Number("two", 2), Number("three", 3))
// it prints [one, two, three]
println(li.map { it.name })
You are not forced to create a new collection of the same type as the original one. You can create a new collection of any type. In this example we create a list of String from a list of Number.
find and groupBy
The function find returns the first element of the collection that satisfies the condition set in the lambda. It is the same function as firstOrNull, which is a longer but also clearer name.
val list = listOf(1, 2, 3, 4)
println(list.find({ it % 2 == 0 })) // it prints 2

val set = setOf("book", "very", "short")
println(set.find { it.length < 4 }) // it prints "null"
println(set.find { it.length > 4 }) // it prints "short"
In this example we use the find function on a list and a set. For the list, it returns only the first element that satisfies the condition, even though there is more than one. For the set, it returns null the first time, and the element that satisfies the condition the second time.
Given that any element after the first one that satisfies the condition is ignored, it is better to use as the condition of find one that identifies a single element. It is not necessary, but it is better for clarity. If you are just interested in any element that satisfies the condition, it is more readable to use the function firstOrNull directly.
val list = listOf(1, 2, 3, 4)
// equivalent to the previous example, but clearer
println(list.firstOrNull({ it % 2 == 0 })) // it prints 2
The function groupBy allows you to divide a collection into groups according to the condition indicated.
data class Number(val name: String, val value: Int)

val list = listOf(Number("one", 3), Number("two", 3), Number("three", 5))
// it prints
// {3=[Number(name=one, value=3), Number(name=two, value=3)],
//  5=[Number(name=three, value=5)]}
println(list.groupBy { it.value })

val set = setOf(1, 2, 3, 4)
// it prints
// {false=[1, 3], true=[2, 4]}
println(set.groupBy({ it % 2 == 0 }))
In the first example we use groupBy to group the elements of a list according to one property of the element. In the second one we divide the elements of a set depending on whether they are odd or even numbers.
The condition of groupBy can be any complex function. In theory you could use whatever condition you want. The following code is pointless, but valid code.
// Random comes from java.util (import java.util.Random)
val rand = Random()
val set = setOf(1, 2, 3, 4)
// it may print {2=[1, 3, 4], 1=[2]}
println(set.groupBy { rand.nextInt(2) + 1 })
fold
The method fold collapses a collection to a single value, using the provided lambda and a starting value.
val list = listOf(5, 10)
// it prints 15
println(list.fold(0, { start, element -> start + element }))
// it prints 0
println(list.fold(15, { start, element -> start - element }))
In the previous example, the first time we start with 0 and add all the elements until the end. In the second case, we start with 15 and subtract all the elements. Basically, the provided value is used as the argument start in the first cycle; then start becomes the value returned by the previous cycle.
So, in the first case the function behaves like this:
- start = 0, element = 5 -> result 5
- start = 5, element = 10 -> result 15
It is natural to use the function with numeric collections, but you are not restricted to numbers.
val reading = setOf("a", "short", "book")
// it prints "Siddharta is a short book"
println(reading.fold("Siddharta is ", { start, element -> start + "$element " }))
flatten and flatMap
The function flatten creates one collection from a supplied list of collections.
val list = listOf(listOf(1, 2), listOf(3, 4))
// it prints [1, 2, 3, 4]
println(list.flatten())
Instead flatMap uses the provided lambda to map each element of the initial collection to a new collection, then it merges all the collections into one.
data class Number(val name: String, val value: Int)

val list = listOf(Number("one", 3), Number("two", 3), Number("three", 5))
// it prints [3, 3, 5]
println(list.flatMap { listOf(it.value) })
In this example, we create one list with all the values of the property value of each element of type Number. These are the steps to arrive at this result:
- each element is mapped to a new collection, with these three new lists
- listOf(3)
- listOf(3)
- listOf(5)
- then these three lists are merged into one list, listOf(3, 3, 5)
Notice that the initial collection does not affect the kind of collection that is returned. That is to say, even if you start with a set you end up with a generic collection.
val set = setOf(Number("one", 3), Number("two", 3), Number("three", 5))
// it prints [3, 3, 5]
println(set.flatMap { setOf(it.value) })
Luckily this, and many other problems, can easily be solved thanks to the fact that you can chain collection functions one after the other.
val set = setOf(Number("one", 3), Number("two", 3), Number("three", 5))
// it prints [3, 5]
println(set.flatMap { setOf(it.value) }.toSet())
You can combine the functions we have just seen, and many others, as you wish.
val numbers = listOf("one", "two", "three", "four", "five")
// it prints [4, 5]
println(numbers.filter { it.length > 3 }.sortedBy { it }.map { it.length }.toSet())
In this example:
- we filter all the elements that are longer than 3 characters
- then we sort the elements
- then we create a new collection by mapping each element to its length
- finally we create a set, which has only unique elements
Generic Types
Why We Need Generic Types
If you already know what generic types are, you can skip this introduction.
A language with static typing, like Kotlin, is safer to use than one with dynamic typing, such as JavaScript. That is because it eliminates a whole class of bugs related to getting the type of a variable wrong.
For example, in JavaScript you may think that a certain variable is a string, so you try to access its field length. However, it is actually a number, because you mixed up the type of the element returned from a function. So, all you get is an exception, at runtime. With a language like Kotlin, these errors are caught at compilation time.
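A small sketch of the same scenario in Kotlin (the function and its name are ours): the compiler knows the return type, so the mistake cannot even compile.

```kotlin
fun wordCount(text: String): Int = text.split(" ").size

fun main() {
    val count = wordCount("a short sentence")
    // count.length   // <- would not compile: Int has no member `length`
    println(count)    // prints 3
}
```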
The downside is that development takes a bit longer and it is a bit harder. Imagine that you are trying to sum two elements. With a language with dynamic typing that is easy: just check if the two elements are numbers and then sum them.
With a language with static typing this is harder: you cannot sum elements of different types, and even when you can do the sum, you do not really know what type the function returns. For example, if you sum two Int values the type returned must be Int, but if the elements are Double the type returned must be Double.
On the other hand, these kinds of constraints are exactly the reason why static typing is safer to use, so we do not want to give them up.
There is a solution that can save both safety and power: generic types.
How to Define Generic Types in Kotlin
Generic types are types that can be specified when you create an object, instead of when you define the class.
class Generic<T>(t: T) {
    var value = t
}

val generic = Generic<Int>(5)
// generic types can be inferred, just like normal types
val doubleGeneric = Generic(5.5)
When you use a generic type you are saying to the compiler that some variables (one or more) will be of a type that will be specified later. Even if it does not yet know which one exactly, the compiler can perform the usual checks to ensure that the rules for type compatibility are respected. For example, if you sum two elements you can say that the elements will both be of type T, so the type returned by the sum function will also be of type T.
You can also use them in functions: you have to put the generic type name (e.g., <T>) after the keyword fun and before the name of the function.
fun <T> genericFunction(obj: T): List<T> {
    return listOf(obj)
}

val item = genericFunction("text")
Constraining a Generic Type
These are the basics, but Kotlin does not stop there. It has quite sophisticated support for defining constraints and conditions on generic types. The end result is that generic types are powerful but also complex.
You can limit a generic type to a specific class or one of its subclasses. You can do it just by specifying the class after the name of the generic type.
// the type Number is predefined by Kotlin
fun <T : Number> double(value: T): Double {
    // accepts T of type Number or its subclasses
    return value.toDouble() * 2.0
}

// T is of type Double
val num_1: Double = double(5.5)
// T is of type Int
val num_2: Double = double(5)
// T is of type Float
val num_3: Double = double(5.5f)
// it does not compile
val error = double("Nope")
In this example, the num values are all of type Double, even though the function accepted arguments of different types. The last line does not compile because it contains an error: String is not a subclass of Number.
Variance
The concept of variance refers to the relation between generic types whose argument types are related. For instance, given that Int is a subclass of Number, is List<Int> a subclass of List<Number>? At first glance you may think that the answer should be obvious, but this is not the case. Let's see why.
Imagine that we have a function that reads elements from an immutable List.
fun read(list: List<Number>) {
    println(list.last())
}
What will happen if we tried using it with lists with different type arguments?
val doubles: MutableList<Double> = mutableListOf(5.5, 4.2)
val ints: MutableList<Int> = mutableListOf(5, 4)
val numbers: MutableList<Number> = mutableListOf(3, 2.3)

read(doubles) // it prints 4.2
read(ints)    // it prints 4
read(numbers) // it prints 2.3
There is no issue at all, everything works fine.
However, things change if we have a function that adds elements to a list.
fun add(list: MutableList<Number>) {
    list.add(33)
}
What will happen if we try to use it with lists of different type arguments? The compiler will stop us most of the time.
val doubles: MutableList<Double> = mutableListOf(5.5, 4.2)
val ints: MutableList<Int> = mutableListOf(5, 4)
val numbers: MutableList<Number> = mutableListOf(3, 2.3)

add(doubles) // this is an error
add(ints)    // this is an error
add(numbers) // this works fine
We cannot safely add elements to a list whose type argument is a subtype, because we do not know the actual type of the elements of the list. In short, the issue is that we could have a list of Double and we cannot add an Int to it, or vice versa. That is because this would break type safety and we would end up with a list containing elements of types different from the ones we expect.
So, it is generally safe to read elements from a List with elements of a subtype, but not to add to it. These kinds of complex situations can arise with all generic classes. Let's see what can happen.
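In fact, Kotlin's read-only List is declared covariant (its interface is List<out E>), which is exactly why the reading example above worked. A quick check:

```kotlin
fun main() {
    val ints: List<Int> = listOf(5, 4)

    // allowed: List<Int> is a subtype of List<Number> because List is covariant
    val numbers: List<Number> = ints
    println(numbers.last()) // prints 4

    // the same assignment with MutableList would not compile,
    // precisely because adding to it must stay type safe
}
```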
Covariance
A generic class is covariant if it is a generic class for which it is true that if A is a subclass of B, then Generic<A> is a subclass of Generic<B>. That is to say the subtype is preserved.
For example, if a List of Int is a subtype of a List of Number then List is covariant.
To define a covariant class you use the modifier out before the name of the generic type.
abstract class Covariant<out T> {
    abstract fun create(): T
}
Declaring a class as covariant allows you to pass to a function arguments of a certain generic type, or any compatible subtype, and to return arguments of a compatible subtype.
open class Animal

// T is covariant and also constrained to be a subtype of Animal
class Group<out T : Animal> { /* .. */ }

fun feed(animals: Group<Animal>) { // <- notice that it is Group<Animal>
    /* .. */
}

class Cat : Animal() { /* .. */ }

fun feedCats(cats: Group<Cat>) { // <- it is Group<Cat>
    feed(cats) // <- if T wasn't covariant this call would be invalid
    // that's because Group<Cat> would not be a subtype of Group<Animal>
}
This example contains three classes: Animal and Cat are normal classes, while Group is a generic class. The covariance is declared for the type parameter T and that is what allows its use in functions that use this type.
We can pass an object of type Group<Cat> to the function feed only because we have declared T to be covariant. If you omitted the out modifier you would receive an error message about an incompatible type:
Type mismatch: inferred type is Group<Cat> but Group<Animal> was expected
Covariance Is Not Free
Having covariance is useful, but it is not a free lunch. It is a promise you make that variables of the generic type will be used only in certain ways that guarantee covariance is respected.
In practical terms it means that if you want to declare a type parameter T as covariant you can only use it in the out position. So, all the methods that use it, inside the generic class, can only produce elements of that type and not consume them. Of course, this restriction only applies to methods inside that specific generic class. You can use the whole generic class normally as an argument of other functions.
For example, the following is valid code.
class Group<out T : Animal> {
    fun buy(): T {
        return Animal() as T
    }
}
However, this is not.
class Group<out T : Animal> {
    // error: you cannot do it if T is covariant
    fun sell(animal: T) { /* .. */ }
}
If you try to use a type declared as covariant in an in position you will see this error message:
Type parameter T is declared as ‘out’ but occurs in ‘in’ position in type T
Contravariance
A generic class is contravariant if it is a generic class for which it is true that if A is a subclass of B, then Generic<B> is a subclass of Generic<A>. That is to say the subtype is reversed.
This image shows the subtype relations between different classes.
To define a contravariant class you use the modifier in before the name of the generic type.
abstract class Contravariant<in T> {
    abstract fun read(e: T)
}
As you can imagine, to declare a type parameter as contravariant you need to respect the opposite restriction to the one for a covariant class: you can only use the type in an in position for the methods of the class.
Let’s see an example of a contravariant class:
open class Animal
class Dog : Animal()

// T is contravariant and constrained to be a subclass of Animal
class Pack<in T : Animal> {
    fun sell(animal: T) { // <- notice that it is in an in position
        /* .. */
    }
}

fun sellDog(dog: Dog): Pack<Dog> {
    return Pack<Animal>() // <- if T wasn't contravariant this call would be invalid
    // that's because Pack<Animal> would not be a subtype of Pack<Dog>
}
This example contains three classes: Animal and Dog are normal classes, while Pack is a generic class. The contravariance is declared for the type parameter T and that is what allows its use in functions that use this type.
We can return an object of type Pack<Animal> in the function sellDog only because we have declared T to be contravariant. If you omitted the in modifier you would receive an error message about an incompatible type:
Type mismatch: inferred type is Pack<Animal> but Pack<Dog> was expected
A Few Niceties
In this chapter we cover a few nice features of Kotlin.
Extension Methods
You can define methods that seem to extend existing classes.
fun String.bePolite(): String {
    return "${this}, please"
}

var request = "Pass the salt"
// it prints "Pass the salt, please"
println(request.bePolite())
However, this is just syntactic sugar: these methods do not modify the original class and cannot access private members.
Alternatives to Static
Kotlin has much to offer, but it lacks one thing: the static keyword. In Java and other languages static is used for a few different reasons, each of which has a Kotlin alternative:
- to create utility functions
- Kotlin has extension methods, which allow you to easily extend a class
- Kotlin allows top-level functions, outside of a class
- global fields or methods for all objects of a class
- Kotlin has the keyword object
We have already seen the first two solutions, so let's see how the keyword object solves the need for static fields of a class.
A Singleton is Better than Static
Kotlin allows you to define a singleton object simply by using the keyword object instead of class.
object Numbers {
    var allNumbers = mutableListOf(1, 2, 3, 4)

    fun sumNumbers(): Int { /* .. */ }

    fun addNumber(number: Int) { /* .. */ }
}
In practical terms, using object is equivalent to using the Singleton pattern: there is only one instance of the class. In this example this means that Numbers can be used directly as the sole instance of the class.
fun main(args: Array<String>) {
    println(Numbers.sumNumbers()) // it prints 10
    Numbers.addNumber(5)
    println(Numbers.sumNumbers()) // it prints 15
}
If you need something to store information about the relationship between different instances of a class, or to access the private members of all instances of a class, you can use a companion object.
class User private constructor(val name: String) {
    // the constructor of the class is private
    companion object {
        // but the companion object can access it
        fun newUser(nickname: String) = User(nickname)
    }
}

// you can access the companion object this way
val mark = User.newUser("Mark")
Companion objects are ideal for the factory method pattern or as an alternative to static fields.
Infix Notation
Using the infix keyword on the declaration of a function you can use it with infix notation: there is no need to use parentheses for the parameter. This notation can only be used with functions that accept one argument.
infix fun String.screams(text: String): String {
    return "$this says aloud $text"
}

val mike = "Michael"
val strongHello = mike screams "hello"

// it prints "Michael says aloud hello"
println(strongHello)
This notation makes the code more readable and gives the impression that you can create custom keywords. It is the reason why you can use ranges like the following one, using until:
for (e in 1 until 3) print("$e ")
It is also a nice example of the power of the language itself. Kotlin is quite a light language and a lot of its features are in the standard library, which means that you too can create powerful and elegant code with ease.
Destructuring Declarations
Destructuring declarations are a feature of Kotlin that allows you to decompose an object into its constituent parts.
val (x, y) = a_point
The variables defined in this way (x and y in this example) are normal variables. The magic is in the class of the object (a_point in this example).
It seems that it is possible to return more than one result from a function, but this is not true: it is just syntactic sugar. The compiler transforms the previous call into the following code.
val x = a_point.component1()
val y = a_point.component2()
This is another example of the power of intelligent conventions in Kotlin. To make destructuring declarations work for your classes you need to define componentN functions preceded by the keyword operator.
class Point(val x: Int, val y: Int) {
    operator fun component1() = x
    operator fun component2() = y
}
They are quite useful with map collections, for which the proper componentN functions are already defined.
for ((key, value) in a_map) {
    // ..
}
They can be used with all collections, to get their elements.
val (a, b, c) = listOf(1, 2, 3)
println("a=$a, b=$b, c=$c")
If you are not interested in a certain element, you can use _ to ignore it.
val (_, y) = a_point
The necessary componentN functions are defined automatically for data classes. This favors the use of a common Kotlin pattern: defining a data class to be used to return values from a function.
For example, you can create a data class that contains both the status of the operation (i.e., success/failure) and the result of the operation.
data class Result(val result: Int, val status: Status)

fun operation(): Result {
    /* .. */
    return Result(result, status)
}

// now you can use it like this
val (result, status) = operation()
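A runnable sketch of this pattern, with a hypothetical Status enum and a dummy operation of our own invention:

```kotlin
// Status and the body of operation() are stand-ins for illustration
enum class Status { SUCCESS, FAILURE }

data class Result(val result: Int, val status: Status)

fun operation(): Result {
    // pretend some computation happened here
    return Result(42, Status.SUCCESS)
}

fun main() {
    // the data class gives us component1()/component2() for free
    val (result, status) = operation()
    println("$result $status") // prints "42 SUCCESS"
}
```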
Using Kotlin for a Real Project
We have seen a complete picture of Kotlin: everything from the basics to lambdas. In this section we are going to put all of this together to create a simple, but realistic, Kotlin program. This program has a graphical UI that allows the user to calculate some metrics of a text: its readability and the time it takes to read it.
Setting Up
We are going to need an external library for the UI of this project, so we are going to set up a Gradle project. Once you have followed the instructions you will have a project that looks like the following. We created three Kotlin files:
- AnalysisApp, which contains the UI
- Program, for the main application code
- TextMetrics, for the library methods that calculate the metrics
The first thing that you have to do is open the build.gradle file and add the TornadoFX dependency. This is a Kotlin library that simplifies using the JavaFX framework, the default framework to create UIs for desktop apps.
[..]

repositories {
    mavenCentral()
}

dependencies {
    compile "org.jetbrains.kotlin:kotlin-stdlib-jdk8:$kotlin_version"
    compile "no.tornado:tornadofx:1.7.15"
}

compileKotlin {
    kotlinOptions.jvmTarget = "1.8"
}

compileTestKotlin {
    kotlinOptions.jvmTarget = "1.8"
}
Most of the text will already be there; we just added the TornadoFX dependency inside the dependencies block.
Once you have added the library you should build the Gradle project, by right-clicking on the build.gradle file. If everything works correctly you will see the TornadoFX library in your project inside the External Libraries section.
Calculating the Text Metrics
Now that everything is ready, let's see the TextMetrics file, which contains the code that calculates the metrics.
The file contains an object with two public functions: one to calculate the time needed to read the text, the other to calculate how hard the text is to read. We create an object instead of a class because the metrics are independent and there is no need to store information about the text.
Time to Read a Text
object TextMetrics {
    // for the theory behind this calculation
    // see
    fun timeToRead(text: String): Double =
        text.count { it.isLetterOrDigit() }.toDouble() / 987
To calculate the time needed to read the text we simply count the number of meaningful characters (i.e., we exclude punctuation and whitespace) and divide the result by 987. The number comes from research that studied this method of calculating the time needed to read a text. The number is valid for texts written in the English language.
This code is very concise thanks to two Kotlin features: the expression syntax for defining functions and the simplified syntax for passing lambdas. The function count accepts a lambda that is applied to each character of the string, so all we need to do is put in a check that matches only meaningful characters.
Readability of a Text
// Coleman–Liau index
fun readability(text: String): Double {
    val words = calculateWords(text).toDouble()
    val sentences = calculateSentences(text).toDouble()
    val letters = text.count { it.isLetterOrDigit() }.toDouble()

    // average number of letters per 100 words
    val l = letters / words * 100
    // average number of sentences per 100 words
    val s = sentences / words * 100

    val grade = 0.0588 * l - 0.296 * s - 15.8

    return if (grade > 0) grade else 0.0
}
To calculate the difficulty of the text we use the Coleman-Liau index, one of the many readability measurements out there. We chose this test because it works on individual letters and words, which are easy to count. Other tests use syllables or rely on a database of simple words, which are harder to compute.
Basically, this test looks up how long are the words and how long are the sentences. The longer the sentences are and the longer the words are the harder is the text.
This test outputs a number that corresponds to the years of schooling necessary to understand the text. This test works only for documents written in the English Language.
The code itself is easy to understand, the only thing we need to ensure is that the grade returned is higher than 0. It could be less if the text is particularly short.
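To get a feel for the formula, here is a small worked example with hand-picked averages (the numbers are ours):

```kotlin
// Coleman–Liau formula with L = letters per 100 words, S = sentences per 100 words.
fun colemanLiau(l: Double, s: Double): Double {
    val grade = 0.0588 * l - 0.296 * s - 15.8
    return if (grade > 0) grade else 0.0
}

fun main() {
    // 5 letters per word (L = 500) and 10-word sentences (S = 10):
    // 0.0588 * 500 - 0.296 * 10 - 15.8 = 10.64, roughly high-school level
    println(colemanLiau(500.0, 10.0))
    // very short words and sentences clamp to 0 rather than going negative
    println(colemanLiau(100.0, 50.0))
}
```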
Calculating the Number of Sentences
```kotlin
    private fun calculateSentences(text: String): Int {
        var index = 0
        var sentences = 0

        while (index < text.length) {
            // we find the next full stop
            index = text.indexOf('.', index)
            // if there are no periods, we end the cycle
            if (index == -1)
                index = text.length

            when {
                // if we have reached the end, we add a sentence
                // this ensures that there is at least 1 sentence
                index + 1 >= text.length -> sentences++
                // we need to check that we are not at the end of the text
                index + 1 < text.length
                    // and that the period is not part of an acronym (e.g. S.M.A.R.T.)
                    && index > 2 && !text[index - 2].isWhitespace() && text[index - 2] != '.'
                    // and that after the period there is a space
                    // (i.e., it is not a number, like 4.5)
                    && text[index + 1].isWhitespace() -> sentences++
            }

            index++
        }

        return sentences
    }
```
Calculating the number of sentences in English is not hard, but it requires a bit of attention. Basically, we need to find all the periods and check that they are not part of either an acronym or a number with a fractional part. Since each text contains at least one sentence, we automatically add one when we reach the end of the input text.
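To see these rules in action, here is a standalone copy of the function (made non-private), exercised on a few sample strings of our own:

```kotlin
// Standalone copy of the sentence counter for experimentation.
fun calculateSentences(text: String): Int {
    var index = 0
    var sentences = 0
    while (index < text.length) {
        index = text.indexOf('.', index)
        if (index == -1) index = text.length
        when {
            // end of text: always count a final sentence
            index + 1 >= text.length -> sentences++
            // not an acronym (S.M.A.R.T.) and followed by a space (not 4.5)
            index + 1 < text.length
                && index > 2 && !text[index - 2].isWhitespace() && text[index - 2] != '.'
                && text[index + 1].isWhitespace() -> sentences++
        }
        index++
    }
    return sentences
}

fun main() {
    println(calculateSentences("Hello world. Bye."))     // 2
    println(calculateSentences("S.M.A.R.T. is smart."))  // 1: acronym periods ignored
    println(calculateSentences("It costs 4.5 dollars.")) // 1: decimal point ignored
}
```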
Calculating the Number of Words
```kotlin
    private fun calculateWords(text: String): Int {
        var words = 1
        var index = 1

        while (index < text.length) {
            if (text[index].isWhitespace()) {
                words++
                while (index + 1 < text.length && text[index + 1].isWhitespace())
                    index++
            }
            index++
        }

        return words
    }
} // end of the object TextMetrics
```
Calculating the number of words is even simpler: we just need to find the whitespace and count it. The only thing we have to check is that we do not count a series of spaces as more than one word. To avoid this error, when we find a space we keep advancing until we find the next non-space character.
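A standalone copy shows the run-of-spaces handling (the sample strings are ours):

```kotlin
// Standalone copy of the word counter for experimentation.
fun calculateWords(text: String): Int {
    var words = 1
    var index = 1
    while (index < text.length) {
        if (text[index].isWhitespace()) {
            words++
            // skip the rest of a run of whitespace so it counts only once
            while (index + 1 < text.length && text[index + 1].isWhitespace())
                index++
        }
        index++
    }
    return words
}

fun main() {
    println(calculateWords("one two three"))     // 3
    println(calculateWords("one   two   three")) // still 3
    println(calculateWords("word"))              // 1
}
```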
The Graphical Interface
The library that we use for the graphical interface is TornadoFX. This library uses the MVC pattern: the model stores the business logic; the view takes care of showing the information; the controller glues the two together and ensures that everything works correctly.
All the code for the TornadoFX application is inside a single file.
The Controller
Let's start by looking at the controller.
```kotlin
import javafx.geometry.Pos
import tornadofx.*
import javafx.scene.text.Font

class MainController(): Controller() {
    fun getReadability(text: String) = when(TextMetrics.readability(text)) {
        in 0..6 -> "Easy"
        in 7..12 -> "Medium"
        else -> "Hard"
    }

    fun getTimeToRead(text: String): Int {
        val minutes = TextMetrics.timeToRead(text).toInt()
        return if (minutes > 0) minutes else 1
    }
}
```
The controller provides two functions that we use to convert the raw information produced by the TextMetrics object into a more readable form. For readability, we translate the number, which represents a school grade, into a simpler textual scale. This is necessary because, unless you are still in school, you probably do not remember what the grades mean. Quick: how many years of schooling does grade 10 correspond to? So, we create a simple conversion:
- anything below high school is easy
- high school is medium
- everything past high school is hard
We also simplify the time to read shown to the user: we round the time up to the nearest minute. That is because the calculation cannot really be that precise: the scientific research behind it does not allow such granularity. Furthermore, there are factors beyond our control that could skew the number; for instance, the real time depends on the actual reading speed of the user. So a precise number could be misleading. The rounded number instead is generally correct, or at the very least better represents the imprecise nature of the calculation.
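Note that getTimeToRead as shown truncates with toInt() and clamps the minimum to 1. If you wanted a literal round-up to the next whole minute, a variant (ours, not the article's code) could look like:

```kotlin
import kotlin.math.ceil

// Hypothetical variant: always round up, never report less than one minute.
fun displayMinutes(rawMinutes: Double): Int = maxOf(1, ceil(rawMinutes).toInt())

fun main() {
    println(displayMinutes(0.3))  // 1
    println(displayMinutes(2.1))  // 3
    println(displayMinutes(5.0))  // 5
}
```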
The View
In the same file we put the view.
```kotlin
class MainView: View() {
    val controller: MainController by inject()
    var timeToRead = text("")
    var readability = text("")
    var textarea = textarea("")
```
This first part is interesting for one reason: the way we initialize the property controller. We do it with the delegation pattern, using the keyword by followed by a delegate. A delegate is a class that follows a specific format and can be used when you need to perform complex operations to initialize a property. In this case, the inject delegate is provided by TornadoFX, and it finds (or creates) for you an instance of the class specified.
This pattern is also useful for lazy initialization: imagine that you have to initialize a property, but the initialization is costly or depends on something else. Traditionally you would have to initialize the property to null and then set the proper value later. With Kotlin, instead, you can use the standard delegate lazy. This delegate accepts a lambda: the first time you access the property, the lambda is executed and the value it returns is stored; subsequent accesses simply return the stored value.
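For instance (our own illustration, not from the article):

```kotlin
class Report {
    // the lambda runs only on the first access; the result is then cached
    val expensiveSummary: String by lazy {
        println("computing summary...")
        "42 pages analyzed"
    }
}

fun main() {
    val r = Report()
    println("report created")    // no computation has happened yet
    println(r.expensiveSummary)  // triggers the lambda, then prints the value
    println(r.expensiveSummary)  // cached: prints only the value
}
```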
The rest of the code contains properties to store the elements of the UI that we are going to see now.
The Root Element
```kotlin
    override val root = vbox {
        prefWidth = 600.0
        prefHeight = 480.0
        alignment = Pos.CENTER

        text("Text Analysis") {
            font = Font(28.0)
            vboxConstraints {
                margin = insets(20.0)
            }
        }

        textarea = textarea("Write your text here") {
            selectAll()
            vboxConstraints {
                margin = insets(20.0)
            }
        }
        textarea.isWrapText = true

        hbox {
            vboxConstraints {
                alignment = Pos.BASELINE_CENTER
                marginBottom = 20.0
            }
            label("Time to Read") {
                hboxConstraints {
                    marginLeftRight(20.0)
                }
            }
            timeToRead = text("No text submitted")
            label("Readability") {
                hboxConstraints {
                    marginLeftRight(20.0)
                }
            }
            readability = text("No text submitted")
        }
```
The root property is a requirement for a TornadoFX app: it contains the content of the view. In our program we assign its value using a type-safe builder. Type-safe builders are a feature of Kotlin that makes it easy to build things with a complex hierarchical structure, such as UIs or data format files, through a beautiful DSL-like interface. Anybody can create a type-safe builder in Kotlin, but they are a bit complex to design, so we have not had the chance to see them before. However, as you can see, they are very easy to use. In this case, we use the one provided by TornadoFX to create the UI of our app.
Without a type-safe builder you would be forced to use configuration files or an awkward series of function calls. With a type-safe builder you can quickly create what you need, and the end result is easy to understand at a glance.
Our view consists of:
- a vertical box (the initial vbox) that contains:
  - a title (the first text)
  - a box that will contain the text inputted by the user (textarea), which is also saved in the property textarea
  - a horizontal box (hbox) that contains:
    - two pairs of a label and a simple text; the two texts are stored in the properties readability and timeToRead
The code itself is quite easy (thanks to lambdas and type-safe builders); there are only a few terms to understand.
A vertical box stacks its elements vertically, while a horizontal box stacks them horizontally. The vboxConstraints and hboxConstraints blocks contain restrictions on the layout of the corresponding elements.
The selectAll() function ensures that the default text (i.e., "Write your text here") is pre-selected. This allows the user to delete it easily with one click or a press of the delete key.
The only element that remains is the button used by the user to start the analysis of the text. The following code is still inside the initial vbox we have just seen, the one assigned to the property root.
```kotlin
        button("Analyze Text") {
            action {
                if (textarea.text.isNotEmpty()) {
                    readability.text = controller.getReadability(textarea.text)
                    timeToRead.text = "${controller.getTimeToRead(textarea.text)} minutes"
                }
            }
        }
    } // <-- vbox ends here
}
```
The button definition contains an action, a lambda, that is executed when the user clicks the button. The action gathers the text in the textarea and calls the functions of the controller, then assigns the results to the proper properties (i.e., the text elements inside the hbox). If there is no text, the action does not change anything.
The Main Program
The main program is contained in its own file.
```kotlin
import javafx.application.Application
import tornadofx.App

class AnalysisApp: App(MainView::class)

fun main(args: Array<String>) {
    Application.launch(AnalysisApp::class.java, *args)
}
```
The file is very short because we just need to do two things:
- create our app class, assigning the view to it
- launch the TornadoFX application with our app class
The end result is a nice graphical application.
Summary
We have learned a lot today: everything you need to know to use Kotlin in real projects, from the basics needed to define variables and functions to more advanced features like lambdas.
We have seen the Kotlin way (safety, conciseness and ease of use) that permeates the whole language: from strings that support interpolation to the great attention paid to the issue of nullability.
There is a lot to learn about Kotlin. The next steps are to keep reading this website:
- continue learning how to use Kotlin in the browser or to create native applications
- learn how to use coroutines, a Kotlin feature that simplifies working with asynchronous programming
- understanding how to use the Javalin web framework with Kotlin
- continue with 100+ Resources To Learn Kotlin The Right Way
And whenever you get a bit lost in Kotlin, you can find your way looking at the official reference.
The companion repository for this article is available on GitHub
The post Kotlin Mega Tutorial appeared first on SuperKotlin.
:When a forward commit block is actually written it contains a sequence :number and a hash of its transaction in order to know whether the :... :Note: I am well aware that a debate will ensue about whether there is :any such thing as "acceptable risk" in relying on a hash to know if a :commit has completed. This occurred in the case of Graydon Hoare's :Monotone version control system and continues to this day, but the fact :is, the cool modern version control systems such as Git and Mercurial :now rely very successfully on such hashes. Nonetheless, the debate :will keep going, possibly as FUD from parties who just plain want to :use some other filesystem for their own reasons. To quell that :definitively I need a mount option that avoids all such commit risk, :perhaps by providing modest sized journal areas salted throughout the :volume whose sole purpose is to record log commit blocks, which then :are not forward. Only slightly less efficient than forward logging :and better than journalling, which has to seek far away to the journal :and has to provide journal space for the biggest possible journal :transaction as opposed to the most commit blocks needed for the largest :possible VFS transaction (probably one).. :Actually, the btree node images are kept fully up to date in the page :cache which is the only way the high level filesystem code accesses :them. They do not reflect exactly what is on the disk, but they do :reflect exactly what would be on disk if all the logs were fully :rolled up ("replay").. :A forward log that carries the edits to some dirty cache block pins :that dirty block in memory and must be rolled up into a physical log :before the cache block can be flushed to disk. Fortunately, such a :rollup requires only a predictable amount of memory: space to load :enough of the free tree to allocate space for the rollup log, enough :... 
Ok, here I spent about 30 minutes constructing a followup but then you answered some of the points later on, so what I am going to do is roll it up into a followup on one of your later points :-) :One traditional nasty case that becomes really nice with logical :forward logging is truncate of a gigantic file. We just need to :commit a logical update like ['resize', inum, 0] then the inode data :truncate can proceed as convenient. Another is orphan inode handling :where an open file has been completely unlinked, in which case we :log the logical change ['free', inum] then proceed with the actual :delete when the file is closed or when the log is replayed after a :surprise reboot. :... :Logical log replay is not idempotent, so special care has to be taken :on replay to ensure that specified changes have not already been Wait, it isn't? I thought it was. I think it has to be because the related physical B-Tree modifications required can be unbounded, and because physical B-Tree modifications are occuring in parallel the related physical operations cause the logical operations to become bound together, meaning the logical ops *cannot* be independantly backed out. Second, the logical log entry for "rm a/b/c" cannot be destroyed (due to log cycling based on available free space) until after the related physical operation has completed, which could occur seconds to minutes later.). :... :ugly or unreliable when there are lot of stacked changes. Instead I :introduce the rule that a logical change can only be applied to a known :good version of the target object, which promise is fullfilled via the :physical logging layer. :... :then apply the logical changes to it there. 
Where interdependencies :exist between updates, for example the free tree should be updated to :reflect a block freed by merging two btree nodes, the entire collection :of logical and physical changes has to be replayed in topologically :sorted order, the details of which I have not thought much about other :than to notice it is always possible.. :When replay is completed, we have a number of dirty cache blocks which :are identical to the unflushed cache blocks at the time of a crash, :and we have not yet flushed any of those to disk. (I suppose this gets :interesting and probably does need some paranoid flushing logic in :replay to handle the bizarre case where a user replays on a smaller :memory configuration than they crashed on.) The thing is, replay :returns the filesystem to the logical state it was in when the crash :happened. This is a detail that journalling filesystem authors tend :to overlook: actually flushing out the result of the replay is :pointless and only obscures the essential logic. Think about a crash :during replay, what good has the flush done? do not see why this example cannot be logically logged in pieces: : : ['new', inum_a, mode etc] ['link', inum_parent, inum_a, "a"] : ['new', inum_b, mode etc] ['link' inum_a, inum_b, "b"] : ['new', inum_c, mode etc] ['link' inum_a_b, inum_c, "c"] : ['new', inum_x, mode etc] ['link' inum_a_b, inum_x, "x"] : ['new', inum_d, mode etc] ['link' inum_a_b_c, inum_d, "d"] : :Logical updates on one line are in the same logical commit. Logical :allocations of blocks to record the possibly split btree leaves and new :allocations omitted for clarity. The omitted logical updates are :bounded by the depth of the btrees. To keep things simple, the logical :log format should be such that it is impossible to overflow one commit :block with the updates required to represent a single vfs level :transaction.. I kinda mix them up. I think of them as transactions at different levels of granularity. :... 
:the resulting blocks are logged logically, but linked into the parent :btree block using a logical update stored in the commit block of the :physical log transaction. How cool is that? That is definitely cool. :transactions are not just throwaway things, they are the actual new :data. Only the commit block is discarded, which I suppose will leave :a lot of one block holes around the volume, but then I do not have to :require that the commit block be immediately adjacent to the body of :the transaction, which will allow me to get good value out of such :holes. On modern rotating media, strictly linear transfers are not :that much more efficient than several discontiguous transfers that all :land relatively close to each other. True enough. Those holes will create significant fragmentation once you cycle through available space on the media, though. :> So your crash recovery code will have to handle :> both meta-data undo and completed and partially completed transactions. (NOTE: I do realize that the REDO log can be compressed just as the UNDO one, by recording actual data shifts from B-Tree insertions and deletions; it gets really complex when you do that, though, and would not be idempotent. :Yes. Each new commit records the sequence number of the oldest commit :that should be replayed. So the train lays down track in front of :itself and pulls it up again when the caboose passes. For now, there :is just one linear sequence of commits, though I could see elaborating :that to support efficient clustering. Yah. I see. Noting that issue I brought up earlier about the "rm a/b/c". Locking the caboose of your log until all related physical operations have been completed could create a problem. :One messy detail: each forward log transaction is written into free :space wherever physically convenient, but we need to be sure that that :free space is not allocated for data until log rollup has proceeded :past that transaction.
One way to do this is to make a special check :against the list of log transactions in flight at the point where :extent allocation thinks it has discovered a suitable free block, which :is the way ddsnap currently implements the idea. I am not sure whether :I am going to stick with that method for Tux3 or just update the disk :image of the free tree to include the log transaction blocks and :somehow avoid logging those particular free tree changes to disk. Hmm, :a choice of two ugly but workable methods, but thankfully neither :affects the disk image. :> I also noticed that very large files also wind up with multiple versions :> of the inode, such as when writing out a terabyte-sized file. : :Right, when writing the file takes longer than the snapshot interval. Yah. It isn't as big a deal when explicit snapshots are taken. It is an issue for HAMMER because the history is more of a continuum. Effectively the system syncer gives us a 30-60 second granular snapshot without having to lift a finger (which incidentally also makes having an 'undo' utility feasible). :So the only real proliferation is the size/mtime attributes, which gets :back to what I was thinking about providing quick access for the :"current" version (whatever that means). st_size is the one thing we can't get away from. For HAMMER, mtime updates don't roll new inodes and both mtime and atime are locked to the ctime when accessed via a snapshot (so one can use a (tar | md5) as a means of validating the integrity of the snapshot). It might complicate mirroring / clustering-style operations, but you may not be targeting that sort of thing. HAMMER is intended to become a cluster filesystem so I use 64 bit inode numbers which are never reused for the entire life of the filesystem.
We have learned through bitter experience that :anything other than an Ext2/UFS style physically stable block of :dirents makes it difficult to support NFS telldir cookies accurately :because NFS vs gives us only a 31 bit cookie to work with, and that is :not enough to store a cursor for, say, a hash order directory :traversal. This is the main reason that I have decided to go back to :basics for the Tux3 directory format, PHTree, and make it physically :stable. The cookies are 64 bits in DragonFly. I'm not sure why Linux would still be using 32 bit cookies, file offsets are 64 bits so you should be able to use 64 bit cookies. For NFS in DragonFly I use a 64 bit cookie where 32 bits is a hash key and 32 bits is an iterator to deal with hash collisions. Poof, problem solved. :In the PHTree directory format lookups are also trivial: the directory :btree is keyed by a hash of the name, then each dirent block (typically :one) that has a name with that hash is searched linearly. Dirent block :pointer/hash pairs are at the btree leaves. A one million entry :directory has about 5,000 dirent blocks referenced by about 1000 btree :leaf blocks, in turn referenced by three btree index blocks (branching :factor of 511 and 75% fullness). These blocks all tend to end up in :the page cache for the directory file, so searching seldom references :the file index btree.. :will pass Hammer in lookup speed for some size of directory because of :the higher btree fanout. PHTree starts from way behind courtesy of the :32 cache lines that have to be hit on average for the linear search, :amounting to more than half the CPU cost of performing a lookup in a :million element directory, so the crossover point is somewhere up in :the millions of entries. Well, the B-Tree fanout isn't actually that big a deal. Remember HAMMER reblocks B-Tree nodes along with everything else. B-Tree nodes in a traversal are going to wind up in the same 16K filesystem block.. :.. 
:> Beyond that the B-Tree is organized by inode number and file offset. :> In the case of a directory inode, the 'offset' is the hash key, so :> directory entries are organized by hash key (part of the hash key is :> an iterator to deal with namespace collisions). :> :> The structure seems to work well for both large and small files, for :> ls -lR (stat()ing everything in sight) style traversals as well as :> tar-like traversals where the file contents for each file is read. : Yes, my thoughts exactly. Using a fairly large but extremely flexible B-Tree element chopped the code complexity down by a factor of 4 at least. If I hadn't done that HAMMER would have taken another year to finish. I plan on expanding efficiency mainly by making the data references fully extent-based, with each B-Tree element referencing an extent. In fact, read()s can already handle arbitrary extents but write clustering and historical access issues forced me to use only two block sizes for writes (16K and 64K). A HAMMER B-Tree element is theoretically capable of accessing up to 2G of linear storage. :... : :This is where I do not quite follow you. File contents are never :really deleted in subsequent versions, they are just replaced. To :truncate a file, just add or update the size attribute for the target :version and recover any data blocks past the truncate point that :belong only to the target version. Hmm. I think the delete_tid is less complex. I'm not really concerned about saving 8 bytes in the B-Tree element. I don't want to make it bigger than the 64 bytes it already is, but neither am I going to add whole new subsystems just to get rid of 8 bytes.
Eventually, building in some accelerator to be able :to skip most inode leaves as well would be a nice thing to do. I will :think about that in the background.. :> You may want to consider something similar. I think using the :> forward-log to optimize incremental mirroring operations is also fine :> as long as you are willing to take the 'hit' of having to scan (though :... : . Yes, I understand now that you've expanded on that topic. :>. : :Things are a little different in Linux. The VFS takes care of most of :what you describe as your filesystem front end, and in fact, the VFS is :capable of running as a filesystem entirely on its own, just by :supplying a few stub methods to bypass the backing store (see ramfs). :I think this is pretty much what you call your front end, though I :probably missed some major functionality you provide there that the :VFS does not directly provide.. In particular, caching a namespace deletion (rm, rename, etc) without touching the meta-data requires implementing a merged lookup so you can cancel-out the directory entry that still exists on-media. So it isn't entirely trivial.). Most filesystems will dirty meta-data buffers related to the media storage as part of executing an operation on the frontend. I don't know of any which have the level of separation that HAMMER has. :Note: no other filesystem on Linux current works this way. They all :pretty much rely on the block IO library which implements the fairly :goofy one-block-at-a-time transfer regimen. The idea of setting up bio :transfers directly from the filesystem is unfortunately something new. :We should have been doing this for a long time, because the interface :is quite elegant, and actually, that is just what the block IO library :is doing many levels down below. It is time to strip away some levels :and just do what needs to be done in the way it ought to be done. Jeeze, BSD's have been doing that forever. That's what VOP_BMAP is used for. 
I'm a little surprised that Linux doesn't do that yet. I'll expand on that down further. :... :> the media. The UNDO *is* flushed to the media, so the flush groups :> can build a lot of UNDO up and flush it as they go if necessary. : :Hmm, and I swear I did not write the above before reading this paragraph. :Many identical ideas there. Yah, I'm editing my top-side responses as I continue to read down. But I'm gleaning a lot of great information from this conversation. :> think you will like it even more when you realize that updating the :filesystem header is not required for most transactions. A definite advantage for a quick write()/fsync() like a database might do. I might be able to implement a forward-log for certain logical operations. Due to my use of a fixed UNDO FIFO area I can pre-format it and I can put a 64 bit sequence number at the head of each block. There would be no chance of it hitting a conflict. Hmm. Yes, I do think I could get rid of the volume header update without too much effort. el= :ements[] }; :... :See, I am really obsessive when it comes to saving bits. None of these :compression hacks costs a lot of code or has a big chance of hiding a :bug, because the extra boundary conditions are strictly local. Man, those are insanely small structures. Watch out for the cpu and in-kernel-memory tracking overhead. : * :-) The typical BSD (Open, Net, Free, DragonFly, etc) buffer cache structure is a logically indexed entity which can also contain a cached physical translation (which is how the direct data bypass works). :... :> Are you just going to leave them in the log and not push a B-Tree :> for them?=20 : . :> :I might eventually add some explicit cursor caching, but various :> :artists over the years have noticed that it does not make as much :> :difference as you might think. :>=20 :> For uncacheable data sets the cpu overhead is almost irrelevant. : :For rotating media, true. There is also flash, and Ramback... Yes, true enough. 
Still only a 20% cache hit though, due to the many other in-memory structures involved (namecache, vnode, in-memory inode, etc). :> I think it will be an issue for people trying to port HAMMER. I'm trying :> to think of ways to deal with it. Anyone doing an initial port can :> just drop all the blocks down to 16K, removing the problem but :> increasing the overhead when working with large files. : :Do the variable sized page hack :-] Noooo! My head aches. :Or alternatively look at the XFS buffer cache shim layer, which :Christoph manhandled into kernel after years of XFS not being accepted :into mainline because of it. The nice advantage of controlling the OS code is I could change DragonFly's kernel to handle mixed block sizes in its buffer cache. But I still want to find a less invasive way in the HAMMER code to make porting to other OS's easier. That is reasonable. I don't plan on going past about a megabyte per extent myself, there's just no reason to go much bigger. :> Orphan inodes in HAMMER will always be committed to disk with a :> 0 link count. The pruner deals with them after a crash. Orphan :> inodes can also be committed due to directory entry dependencies. : :That is nice, but since I want to run without a pruner I had better :stick with the logical logging plan. The link count of the inode will :also be committed as zero, or rather, the link attribute will be :entirely missing at that point, which allows for a cross check. GEOM is fairly simplistic compared to LVM, but I'll keep your comments regarding LVM in mind (never having used it). GEOM's claim to fame is its encryption layer. Maybe the ticket for DragonFly is to simply break storage down into a reasonable number of pieces, like cutting up a drive into 256 pieces, and create a layer to move and glue those pieces together into larger logical entities.
: :It took two days to write this ;-) : :I hope the fanout effect does not mean the next post will take four days. : :Regards, : :Daniel I'm editing down as much as I can :-) This one took 6 hours, time to get lunch! -Matt Matthew Dillon <dillon@backplane.com>
Using JSTL
JSTL includes a wide variety of tags that naturally fit into discrete functional areas. Therefore, JSTL is exposed via multiple URIs to clearly show the functional areas it covers and give each area its own namespace. Table 6-1 summarizes these functional areas, subfunctions in each area, tags in each subfunction, and the prefixes used in the Duke's Bookstore application.
The URIs used to access the libraries are:
- Core:
- XML:
- Internationalization:
- SQL:
The JSTL tag libraries come in two versions (see Twin Libraries). The URIs for the JSTL-EL library are as shown above. The URIs for the JSTL-RT library are formed by appending _rt to the end.
All of the material in The J2EE Tutorial for the Sun ONE Platform is copyright-protected and may not be published in other works without express written permission from Sun Microsystems.
Introduction: Wireless Robot V2 (Support WiFi & Bluetooth)
Step 1: Car Structure
I got the motors, gearboxes and tires from two broken kids' toy cars from a nearby scrap market. They cost me about $10, and I connected them using DIN rail and plastic as shown in the pictures.
Step 2: Connection Digram
The circuit is built around the serial pins of the ATmega16 MCU and consists of:
1- H-Bridge Motor driver using two L298.
2- Atmega16 MCU.
3- LEDs Driver using ULN2003.
4- Serial Bluetooth Module.
5- Serial-To-Ethernet converter.
6- WiFi Wireless access point.
7- Easy N surveillance IP cam.
8- 12 Volt battery with 7809 & 7805 voltage regulator for router and IP cam .
9- 6 Volt Battery for motors.
Step 3: H-Bridge Circuits
Two L298 dual H-bridge drivers are connected as shown in the pictures.
The motors run on 6 volts with a maximum current of 2.4 A each.
L298 Data sheet attached.
Step 4: LED Driver
The ULN2003 Darlington transistor array is a quick solution and provides 7 outputs.
The circuit shown in attached picture.
Datasheet attached.
Step 5: Controller Circuit
The source code was debugged with Atmel AVR Studio 4 and an AVR MKII ISP programmer.
The code works like this:
1- Enable the RX/TX (USART) pins of the ATmega16 MCU.
2- Send an ASCII code from the PC or tablet.
3- The MCU translates the ASCII code into a specific output on port A and port C.
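Since the firmware just matches single ASCII bytes, the sender on the PC or tablet side only needs to emit the right byte per action. A hypothetical host-side helper (ours, written in Kotlin; the function and action names are illustrative, not part of the project):

```kotlin
// Maps a UI action to the single ASCII byte the firmware's if-chain expects.
fun commandByte(action: String): Int? = when (action) {
    "forward" -> 'w'.code   // 0x77
    "left"    -> 'a'.code   // 0x61
    "stop"    -> 's'.code   // 0x73
    "right"   -> 'd'.code   // 0x64
    "back"    -> 'x'.code   // 0x78
    else      -> null       // unknown action: send nothing
}

fun main() {
    println(commandByte("forward"))  // 119 (0x77)
    println(commandByte("dance"))    // null
}
```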
For the 16 MHz external resonator, the fuse bits should be set to High: 0xC9, Low: 0xFF, as shown in the attached picture.
Code:
/*
ATmega16 16MHz external frequency resonator
Baud Rate 9600 No Parity,1 Stop Bit,Flow Control:None
*/
#include <avr/io.h>
#include <inttypes.h>
#include <util/delay.h>
void USARTInit(uint16_t ubrr_value)
{
//Set Baud rate
UBRRL = ubrr_value;
UBRRH = (ubrr_value>>8);
UCSRC=(1<<URSEL)|(3<<UCSZ0); // Set Asynchronous mode,No Parity ,1 StopBit
UCSRB=(1<<RXEN)|(1<<TXEN); //Enable The receiver and transmitter
}
char USARTReadChar()
{
while(!(UCSRA & (1<<RXC)))
{
// do nothing
}
return UDR;
}
void USARTWriteChar(char data)
{
while(!(UCSRA & (1<<UDRE)))
{
//do nothing
}
UDR=data;
}
int main(void)
{
DDRC=0xff;
DDRA=0xff;
char data;
USARTInit(103); //for 16Mhz and baud 9600 UBRR = 103 and for baud 19200 UBRR = 51
while(1)
{
data=USARTReadChar();
if (data==0x71){PORTC=0b10000000;USARTWriteChar('Q');} //q in ascii
if (data==0x77){PORTC=0b00001001;USARTWriteChar('w');} //w in ascii Forward
if (data==0x65){PORTC=0b01000000;USARTWriteChar('e');} //e in ascii
if (data==0x61){PORTC=0b00000011;USARTWriteChar('A');} //a in ascii Left
if (data==0x73){PORTC=0b00000000;USARTWriteChar('s');} //s in ascii Stop
if (data==0x64){PORTC=0b00001100;USARTWriteChar('d');} //d in ascii Right
if (data==0x7A){PORTC=0b00100000;USARTWriteChar('z');} //z in ascii
if (data==0x78){PORTC=0b10000110;USARTWriteChar('x');} //x in ascii Backward
if (data==0x63){PORTC=0b11110000;USARTWriteChar('c');} //c in ascii
if (data==0x69){PORTC=0b00001001;_delay_ms(200);PORTC=0b00000000;} //i in ascii Forward
if (data==0x6A){PORTC=0b00000011;_delay_ms(200);PORTC=0b00000000;} //j in ascii Left
if (data==0x6C){PORTC=0b00001100;_delay_ms(200);PORTC=0b00000000;} //l in ascii Right
if (data==0x6B){PORTC=0b00000110;_delay_ms(200);PORTC=0b00000000;} //k in ascii Back
if (data==0x31){PORTA=0b00000001;USARTWriteChar('1');} //1 in ascii //2 LED On
if (data==0x32){PORTA=0b00000010;USARTWriteChar('2');} //2 in ascii //4 LED on
if (data==0x33){PORTA=0b00000111;USARTWriteChar('3');} //3 in ascii //6 LED on
if (data==0x34){PORTA=0b00001000;USARTWriteChar('4');} //4 in ascii //Red LED on
if (data==0x35){PORTA=0b00010000;USARTWriteChar('5');} //5 in ascii
if (data==0x36){PORTA=0b00100000;USARTWriteChar('6');} //6 in ascii
if (data==0x37){PORTA=0b01000000;USARTWriteChar('7');} //7 in ascii
if (data==0x38){PORTA=0b10000000;USARTWriteChar('8');} //8 in ascii
if (data==0x39){PORTA=0b00000000;USARTWriteChar('9');} //9 in ascii //All Off
else {}
}
}
Step 6: Serial to Ethernet Converter
I got the module from the website below:
Connect:
VDD to 5 Volt.
GND to negative.
RX to TX of ATmega16.
TX to RX of Atmega16.
CFG: normal mode when connected to positive, configuration mode when connected to negative.
An RS232-to-TTL module is also required to configure this module.
You can download all the documents from the mentioned website.
For this project I configured the converter settings as below:
Work Mode: TCP/IP client.
Module IP : 192.168.1.2
Subnet Mask:255.255.255.0
Default gateway :192.168.1.1 (Access point IP).
Parity/data/stop :None /8/1
Destination IP :192.168.1.3 (Laptop IP).
Destination Port: 8234.
Baud Rate : 9600
Step 7: Com-Redirector Software
Com-redirector software is used to create a virtual COM port on the laptop, because there is no direct physical connection to the laptop; you can also use a virtual serial ports emulator.
PuTTY or HyperTerminal can be used to send information through this virtual serial port.
Please watch the video for more information.
Step 8: Serial Bluetooth Module
I got the slave-mode serial Bluetooth module from the website below (all documents included):
Pass Key:1234
As for software, you can download the Bluetooth Controller app from the Android market; it's an amazing app and easy to use.
Step 9: Switch Selector
In the end, I connected a selector switch to select the wireless mode (WiFi or Bluetooth).
71 Discussions
sir can you please say me how to draw the data flow diagram for this robot??
Hey, Sorry to ask but please.. can you mention the links of these items which are sold on online international shopping websites.. the links you mentioned is not delivered in my country(India)...for ex. Amazon, Flipkart, Snapdeal, Ebay.
ThankYou
(mustafa.saifee@live.com)
Hi Husham Samir, you're project wonderful(Wireless Robot V2), i want to create it, can i have pdf and program this project. please send article to my email:mohsennoruzi@gmail.com. thankyou
dear sir
nice and great work. i like to test this project, can i get PDF file of this full project?
please can you send it to my email. my Email is Indrajith105@gmail.com. thank you,
best regards
hi
instead of using serial to ethernet converter, i'm using serial to usb and usb to ethernet combo. But i still don't know how to send control values from putty to the microcontroller for the controlling part. Can you help me plz?
okay explain more how did you connect these two things together and how you connected to MCU.
for putty and how to send Information ,I previously told you to read the below tutorial from extreme electronic , every things mentioned there .
How to send control the rover by putty? what values do i have to input in putty to move the rover? i saw your video but couldn't understand it. Help please!
from where did you download this com redirector software?
CD included with the converter
but isn't the software free for download
well I get it free when I bought the converter.
Which converter?
I'm sorry to bother you, but can u please send me the com redirector software as a zip file to my mail id.I will be highly grateful to you if you did.
Step 6: Serial to Ethernet Converter
I bought a TP-LINK wireless N ADSL2+ modem router for the wi-fi access point. I just want to know how to give a portable power supply for the router that will be placed on the rover.
First of all check the DC voltage needed in your wireless TP link.
in my case TP link took 9 volt 1 Amp so I use L7809CT voltage regulator TO-3 type
Can you help me in configuring the router?
read the quick manual for this router
set the wireless as access point (AP) and put an SSID name for it.
I am stuck with pairing of Bluetooth module and mobile android app.
It is asking pass key again and again
Hi if you are using the same bluetooth module ,the pass key is 1234
if it didn't work check the web site belong for this bluetooth module.
Working with Graphs
Graphs allow us to understand complex networks by focusing on relationships between pairs of items. Each item is represented by a vertex in the graph, and relationships between items are represented by edges.
To facilitate graph-oriented data analysis, GraphLab Create offers a SGraph object, a scalable graph data structure backed by SFrames. In this chapter, we show that SGraphs allow arbitrary dictionary attributes on vertices and edges, flexible vertex and edge query functions, and seamless transformation to and from SFrames.
Creating an SGraph
There are several ways to create an SGraph. The simplest is to start with an
empty graph, then add vertices and edges in the form of lists of graphlab.Vertex
and graphlab.Edge objects. SGraphs are structurally
immutable; in the following snippet,
add_vertices and
add_edges both return
a new graph.
from graphlab import SGraph, Vertex, Edge

g = SGraph()
verts = [Vertex(0, attr={'breed': 'labrador'}),
         Vertex(1, attr={'breed': 'labrador'}),
         Vertex(2, attr={'breed': 'vizsla'})]
g = g.add_vertices(verts)
g = g.add_edges(Edge(1, 2))
print g
SGraph({'num_edges': 1, 'num_vertices': 3})
Vertex Fields:['__id', 'breed']
Edge Fields:['__src_id', '__dst_id']
We can chain these steps together to make a new graph in a single line.
g = SGraph().add_vertices([Vertex(i) for i in range(10)]).add_edges(
    [Edge(i, i+1) for i in range(9)])
SGraphs can also be created from an edge list stored in an SFrame. Vertices are added to the graph automatically based on the edge list, and columns of the SFrame not used as source or destination vertex IDs are assumed to be edge attributes. For this example we download a dataset of James Bond characters to an SFrame, then build the graph.
from graphlab import SFrame

edge_data = SFrame.read_csv('')
g = SGraph()
g = g.add_edges(edge_data, src_field='src', dst_field='dst')
print g
SGraph({'num_edges': 20, 'num_vertices': 10})
The SGraph constructor also accepts vertex and edge SFrames directly. We can construct the same James Bond graph with the following two lines:
vertex_data = SFrame.read_csv('')
g = SGraph(vertices=vertex_data, edges=edge_data, vid_field='name',
           src_field='src', dst_field='dst')
Finally, an SGraph can be created directly from a file, either local or remote, using the graphlab.load_sgraph() method. Loading a graph with this method works with both the native binary save format and a variety of text formats. In the following example we save the SGraph in binary format to a new folder called "james_bond", then re-load it under a different name.
g.save('james_bond')
new_graph = graphlab.load_sgraph('james_bond')
Inspecting SGraphs
Small graphs can be explored very efficiently with the
SGraph.show method,
which displays a plot of the graph. The vertex labels can be IDs or any vertex
attribute.
g.show(vlabel='id', highlight=['James Bond', 'Moneypenny'], arrows=True)
For large graphs visual displays are difficult, but graph exploration can still
be done with the
SGraph.summary---which prints the number of vertices and
edges---or by retrieving and plotting subsets of edges and vertices.
print g.summary()
{'num_edges': 20, 'num_vertices': 10}
To retrieve the contents of an SGraph, the
get_vertices and
get_edges
methods return SFrames. These functions can filter edges and vertices based on
vertex IDs or attributes. Omitting IDs and attributes returns all vertices or
edges.
sub_verts = g.get_vertices(ids=['James Bond'])
print sub_verts
+------------+--------+-----------------+---------+ | __id | gender | license_to_kill | villian | +------------+--------+-----------------+---------+ | James Bond | M | 1 | 0 | +------------+--------+-----------------+---------+ [1 rows x 4 columns]
sub_edges = g.get_edges(fields={'relation': 'worksfor'})
print sub_edges
+---------------+-------------+----------+ | __src_id | __dst_id | relation | +---------------+-------------+----------+ | M | Moneypenny | worksfor | | M | James Bond | worksfor | | M | Q | worksfor | | Elliot Carver | Henry Gupta | worksfor | | Elliot Carver | Gotz Otto | worksfor | +---------------+-------------+----------+ [5 rows x 3 columns]
The
get_neighborhood
method provides a convenient way to retrieve the subset
of a graph near a set of target vertices, also known as the egocentric
neighborhood of the target vertices. The
radius of the neighborhood is the
maximum length of a path between any of the targets and a neighborhood vertex.
If
full_subgraph is true, then edges between neighborhood vertices are
included even if the edges are not on direct paths between a target and a
neighbor.
targets = ['James Bond', 'Moneypenny']
subgraph = g.get_neighborhood(ids=targets, radius=1, full_subgraph=True)
subgraph.show(vlabel='id', highlight=['James Bond', 'Moneypenny'], arrows=True)
Modifying SGraphs
SGraphs are structurally immutable, but the data stored on vertices and edges
can be mutated using two special SGraph properties.
SGraph.vertices and
SGraph.edges are SFrames containing the vertex and edge data, respectively.
The following examples show the difference between the special graph-related
SFrames and normal SFrames. First, note that the following lines both produce
the same effect.
g.edges.print_rows(5)
g.get_edges().print_rows(5)
+----------------+----------------+------------+ | __src_id | __dst_id | relation | +----------------+----------------+------------+ | Moneypenny | M | managed_by | | Inga Bergstorm | James Bond | friend | | Moneypenny | Q | colleague | | Henry Gupta | Elliot Carver | killed_by | | James Bond | Inga Bergstorm | friend | +----------------+----------------+------------+ [5 rows x 3 columns]
The difference is that the return value of
g.get_edges() is a normal SFrame
independent from
g, whereas
g.edges is bound to
g. We can modify the edge
data using this special edge SFrame. The next snippet mutates the relation
attribute on the edges of
g. In particular, it extracts the first letter and
converts it to upper case.
g.edges['relation'] = g.edges['relation'].apply(lambda x: x[0].upper())
On the other hand, the following code does not mutate the relation attribute on
the edges of
g. If it had a permanent effect, the relation field would be
converted to a lowercase letter, but in the result it clearly remains uppercase.
e = g.get_edges()  # e is a normal SFrame independent of g.
e['relation'] = e['relation'].apply(lambda x: x[0].lower())
Calling a method like
head(),
tail(), or
append() on a special
graph-related SFrame also results in a new instance of a regular SFrame. For
example, the following code does not mutate
g.
e = g.edges.head(5)
e['is_friend'] = e['relation'].apply(lambda x: x[0] == 'F')
Another important difference of these two special SFrames is that the
__id,
__src_id, and
__dst_id fields are not mutable because changing them would
change the structure of the graph, and SGraphs are structurally immutable.
Otherwise,
g.vertices and
g.edges act like normal SFrames, which makes
modifying graph data very easy. For example, adding (removing) an edge
field is the same as adding (removing) a column to (from) an SFrame:
g.edges['weight'] = 1.0
del g.edges['weight']
The
triple_apply
method provides a particularly powerful way to modify SGraph
vertex and edge attributes.
triple_apply applies a user-defined function to
all edges asynchronously, allowing you to do a computation that modifies edge
data based on vertex data, or vice versa. A wide range of
methods---single-source shortest
path and weighted
PageRank, for example---can be expressed
very simply with this primitive.
The first step is to define a function that takes as input an edge in the graph, together with the incident source and destination vertices. This triple apply function modifies vertex and edge fields in some way, then returns the modified (source vertex, edge, destination vertex) triple. In this example, we compute the degree of each vertex in the James Bond graph, which is the number of edges that touch each vertex.
def increment_degree(src, edge, dst):
    src['degree'] += 1
    dst['degree'] += 1
    return (src, edge, dst)
The next step is to create a new field in our SGraph's vertex data to hold the answer.
g.vertices['degree'] = 0
Finally, we use the
triple_apply method to apply the function to all of the
edges (together with their incident source and destination vertices). This
method requires specification of which fields are allowed to be changed by
our function.
g = g.triple_apply(increment_degree, mutated_fields=['degree'])
print g.vertices.sort('degree', ascending=False)
+----------------+--------+--------+-----------------+---------+ | __id | degree | gender | license_to_kill | villian | +----------------+--------+--------+-----------------+---------+ | James Bond | 8 | M | 1 | 0 | | Elliot Carver | 7 | M | 0 | 1 | | M | 6 | M | 1 | 0 | | Moneypenny | 4 | F | 1 | 0 | | Q | 4 | M | 1 | 0 | | Paris Carver | 3 | F | 0 | 1 | | Inga Bergstorm | 2 | F | 0 | 0 | | Henry Gupta | 2 | M | 0 | 1 | | Wai Lin | 2 | F | 1 | 0 | | Gotz Otto | 2 | M | 0 | 1 | +----------------+--------+--------+-----------------+---------+ [10 rows x 5 columns]
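The mechanics of triple_apply can be sketched in plain Python, independent of GraphLab. The mini-graph below is hypothetical (not the James Bond data): the same update function is applied to every (source vertex, edge, destination vertex) triple, accumulating each vertex's degree.

```python
def increment_degree(src, edge, dst):
    # Same update rule as in the text: each edge contributes
    # one degree to both of its endpoints.
    src["degree"] += 1
    dst["degree"] += 1
    return (src, edge, dst)

# Hypothetical mini-graph: vertex attributes keyed by id, plus an edge list.
vertices = {name: {"degree": 0} for name in ("M", "Q", "Bond")}
edges = [("M", "Q"), ("M", "Bond"), ("Q", "Bond")]

# Sequential stand-in for triple_apply's (asynchronous) sweep over all edges.
for src_id, dst_id in edges:
    increment_degree(vertices[src_id], {}, vertices[dst_id])

degrees = {name: attrs["degree"] for name, attrs in vertices.items()}
print(degrees)  # {'M': 2, 'Q': 2, 'Bond': 2}
```

Each vertex in this triangle touches two of the three edges, so every degree is 2; the real triple_apply does the same bookkeeping, but in parallel and at SFrame scale.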
James Bond is quite the popular guy!
To learn more, check out the graph analytics toolkits, the API Reference for SGraphs, and the hands-on exercises at the end of the chapter.
Python’s
not operator allows you to invert the truth value of Boolean expressions and objects. You can use this operator in Boolean contexts, such as
if statements and
while loops. It also works in non-Boolean contexts, which allows you to invert the truth value of your variables.
Using the
not operator effectively will help you write accurate negative Boolean expressions to control the flow of execution in your programs.
In this tutorial, you’ll learn:
- How Python’s
notoperator works
- How to use the
notoperator in Boolean and non-Boolean contexts
- How to use the
operator.not_()function to perform logical negation
- How and when to avoid unnecessary negative logic in your code
You’ll also code a few practical examples that will allow you to better understand some of the primary use cases of the
not operator and the best practices around its use. To get the most out of this tutorial, you should have some previous knowledge about Boolean logic, conditional statements, and
while loops.
Working With Boolean Logic in Python
George Boole put together what is now known as Boolean algebra, which relies on true and false values. It also defines a set of Boolean operations:
AND,
OR, and
NOT. These Boolean values and operators are helpful in programming because they help you decide the course of action in your programs.
In Python, the Boolean type,
bool, is a subclass of
int:
>>> issubclass(bool, int)
True
>>> help(bool)
Help on class bool in module builtins:

class bool(int)
    bool(x) -> bool
    ...
This type has two possible values,
True and
False, which are built-in constants in Python and must be capitalized. Internally, Python implements them as integer numbers:
>>> type(True)
<class 'bool'>
>>> type(False)
<class 'bool'>
>>> isinstance(True, int)
True
>>> isinstance(False, int)
True
>>> int(True)
1
>>> int(False)
0
Python internally implements its Boolean values as
1 for
True and
0 for
False. Go ahead and execute
True + True in your interactive shell to see what happens.
Python provides three Boolean or logical operators: the and operator, the or operator, and the not operator.
With these operators, you can build expressions by connecting Boolean expressions with each other, objects with each other, and even Boolean expressions with objects. Python uses English words for the Boolean operators. These words are keywords of the language, so you can’t use them as identifiers without causing a syntax error.
In this tutorial, you’ll learn about Python’s
not operator, which implements the logical
NOT operation or negation.
Getting Started With Python’s
not Operator
The
not operator is the Boolean or logical operator that implements negation in Python. It’s unary, which means that it takes only one operand. The operand can be a Boolean expression or any Python object. Even user-defined objects work. The task of
not is to reverse the truth value of its operand.
If you apply
not to an operand that evaluates to
True, then you get
False as a result. If you apply
not to a false operand, then you get
True:
>>> not True
False
>>> not False
True
The
not operator negates the truth value of its operand. A true operand returns
False. A false operand returns
True. These two statements uncover what is commonly known as the truth table of
not:
With
not, you can negate the truth value of any Boolean expression or object. This functionality makes it worthwhile in several situations:
- Checking unmet conditions in the context of if statements and while loops
- Inverting the truth value of an object or expression
- Checking if a value is not in a given container
- Checking for an object’s identity
In this tutorial, you’ll find examples that cover all these use cases. To kick things off, you’ll start by learning how the
not operator works with Boolean expressions and also with common Python objects.
A Boolean expression always returns a Boolean value. In Python, this kind of expression returns
True or
False. Say you want to check if a given numeric variable is greater than another:
>>> x = 2
>>> y = 5
>>> x > y
False
>>> not x > y
True
The expression
x > y always returns
False, so you can say it’s a Boolean expression. If you place
not before this expression, then you get the inverse result,
True.
Note: Python evaluates operators according to a strict order, commonly known as operator precedence.
For example, Python evaluates math and comparison operators first. Then it evaluates logical operators, including
not:
>>> not True == False
True
>>> False == not True
  File "<stdin>", line 1
    False == not True
             ^
SyntaxError: invalid syntax
>>> False == (not True)
True
In the first example, Python evaluates the expression
True == False and then negates the result by evaluating
not.
In the second example, Python evaluates the equality operator (
==) first and raises a
SyntaxError because there’s no way to compare
False and
not. You can surround the expression
not True with parentheses (
()) to fix this problem. This quick update tells Python to evaluate the parenthesized expression first.
Among logical operators,
not has higher precedence than the
and operator and the
or operator, which have the same precedence.
You can also use
not with common Python objects, such as numbers, strings, lists, tuples, dictionaries, sets, user-defined objects, and so on.
In each example,
not negates the truth value of its operand. To determine whether a given object is truthy or falsy, Python uses
bool(), which returns
True or
False depending on the truth value of the object at hand.
This built-in function internally uses the following rules to figure out the truth value of its input: an object is falsy if its class defines a __bool__() method that returns False or a __len__() method that returns 0. Otherwise, the object is truthy.
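Assuming the standard truth-testing protocol described above, a short sketch shows these rules in action with two illustrative (hypothetical) classes:

```python
class Empty:
    def __len__(self):
        return 0  # zero length makes instances falsy


class AlwaysFalse:
    def __bool__(self):
        return False  # explicit truth value


print(bool(Empty()))        # False: __len__() returns 0
print(bool(AlwaysFalse()))  # False: __bool__() returns False
print(bool(object()))       # True: plain objects are truthy by default
print(not Empty())          # True: not inverts the falsy operand
```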
Once
not knows the truth value of its operand, it returns the opposite Boolean value. If the object evaluates to
True, then
not returns
False. Otherwise, it returns
True.
Note: Always returning
True or
False is an important difference between
not and the other two Boolean operators, the
and operator and the
or operator.
The
and operator and the
or operator return one of the operands in an expression, while the
not operator always returns a Boolean value:
>>> 0 and 42
0
>>> True and False
False
>>> True and 42 > 27
True
>>> 0 or 42
42
>>> True or False
True
>>> False or 42 < 27
False
>>> not 0
True
>>> not 42
False
>>> not True
False
With the
and operator and the
or operator, you get
True or
False back from the expression when one of these values explicitly results from evaluating the operands. Otherwise, you get one of the operands in the expression. On the other hand,
not behaves differently, returning
True or
False regardless of the operand it takes.
To behave like the
and operator and the
or operator, the
not operator would have to create and return new objects, which is often ambiguous and not always straightforward. For example, what if an expression like
not "Hello" returned an empty string (
"")? What would an expression like
not "" return? That’s the reason why the
not operator always returns
True or
False.
Now that you know how
not works in Python, you can dive into more specific use cases of this logical operator. In the following section, you’ll learn about using
not in Boolean contexts.
Using the
not Operator in Boolean Contexts
Like the other two logical operators, the
not operator is especially useful in Boolean contexts. In Python, you have two statements that define Boolean contexts:
- if statements let you perform conditional execution and take different courses of action based on some initial conditions.
- while loops let you perform conditional iteration and run repetitive tasks while a given condition is true.
These two structures are part of what you’d call control flow statements. They help you decide a program’s execution path. In the case of the
not operator, you can use it to select the actions to take when a given condition is not met.
if Statements
You can use the
not operator in an
if statement to check if a given condition is not met. To make an
if statement test if something didn’t happen, you can put the
not operator in front of the condition at hand. Since the
not operator returns the negated result, something true becomes
False and the other way around.
The syntax for an
if statement with the
not logical operator is:
if not condition: # Do something...
In this example,
condition could be a Boolean expression or any Python object that makes sense. For example,
condition can be a variable containing a string, a list, a dictionary, a set, and even a user-defined object.
If
condition evaluates to false, then
not returns
True and the
if code block runs. If
condition evaluates to true, then
not returns
False and the
if code block doesn’t execute.
A common situation is one where you use a predicate or Boolean-valued function as a
condition. Say you want to check if a given number is prime before doing any further processing. In that case, you can write an
is_prime() function:
>>> import math
>>> def is_prime(n):
...     if n <= 1:
...         return False
...     for i in range(2, int(math.sqrt(n)) + 1):
...         if n % i == 0:
...             return False
...     return True
...
>>> # Work with prime numbers only
>>> number = 3
>>> if is_prime(number):
...     print(f"{number} is prime")
...
3 is prime
In this example,
is_prime() takes an integer number as an argument and returns
True if the number is prime. Otherwise, it returns
False.
You can also use this function in a negative conditional statement to approach those situations where you want to work with composite numbers only:
>>> # Work with composite numbers only
>>> number = 8
>>> if not is_prime(number):
...     print(f"{number} is composite")
...
8 is composite
Since it’s also possible that you need to work with composite numbers only, you can reuse
is_prime() by combining it with the
not operator as you did in this second example.
Another common situation in programming is to find out if a number is inside a specific numeric interval. To determine if a number
x is in a given interval in Python, you can use the
and operator or you can chain comparison operators appropriately:
>>> x = 30

>>> # Use the "and" operator
>>> if x >= 20 and x < 40:
...     print(f"{x} is inside")
...
30 is inside

>>> # Chain comparison operators
>>> if 20 <= x < 40:
...     print(f"{x} is inside")
...
30 is inside
In the first example, you use the
and operator to create a compound Boolean expression that checks if
x is between
20 and
40. The second example makes the same check but using chained operators, which is a best practice in Python.
Note: In most programming languages, the expression
20 <= x < 40 doesn’t make sense. It would start by evaluating
20 <= x, which is true. The next step would be to compare that true result with
40, which doesn’t make much sense, so the expression fails. In Python, something different happens.
Python internally rewrites this type of expression to an equivalent
and expression, such as
x >= 20 and x < 40. It then performs the actual evaluation. That’s why you get the correct result in the example above.
You may also face the need to check if a number is outside of the target interval. To this end, you can use the
or operator:
>>> x = 50
>>> if x < 20 or x >= 40:
...     print(f"{x} is outside")
...
50 is outside
This
or expression allows you to check if
x is outside the
20 to
40 interval. However, if you already have a working expression that successfully checks if a number is in a given interval, then you can reuse that expression to check the opposite condition:
>>> x = 50

>>> # Reuse the chained logic
>>> if not (20 <= x < 40):
...     print(f"{x} is outside")
...
50 is outside
In this example, you reuse the expression you originally coded to determine if a number is inside a target interval. With
not before the expression, you check if
x is outside the
20 to
40 interval.
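This reuse pattern is an instance of De Morgan's laws: negating a compound condition with not is equivalent to negating each part and flipping and/or. A small sketch with sample values (chosen for illustration) verifies the equivalence:

```python
# De Morgan: not (A and B) == (not A) or (not B), so negating the
# interval check 20 <= x < 40 yields x < 20 or x >= 40.
for x in (10, 30, 50):
    negated_interval = not (20 <= x < 40)
    piecewise = x < 20 or x >= 40
    assert negated_interval == piecewise
    print(x, negated_interval)  # 10 True, 30 False, 50 True
```

Reusing the positive check under not keeps one source of truth for the interval logic, which is usually less error-prone than maintaining the hand-negated version.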
while Loops
The second Boolean context in which you can use the
not operator is in your
while loops. These loops iterate while a given condition is met or until you jump out of the loop by using
break, using
return, or raising an exception. Using
not in a
while loop allows you to iterate while a given condition is not met.
Say you want to code a small Python game to guess a random number between 1 and 10. As a first step, you decide to use
input() to capture the user’s name. Since the name is a requirement for the rest of the game to work, you need to make sure you get it. To do that, you can use a
while loop that asks for the user’s name until the user provides a valid one.
Fire up your code editor or IDE and create a new
guess.py file for your game. Then add the following code:
 1# guess.py
 2
 3from random import randint
 4
 5secret = randint(1, 10)
 6
 7print("Welcome!")
 8
 9name = ""
10while not name:
11    name = input("Enter your name: ").strip()
In
guess.py, you first import
randint() from
random. This function allows you to generate random integer numbers in a given range. In this case, you’re generating numbers from
1 to
10, both included. Then you print a welcoming message to the user.
The
while loop on line 10 iterates until the user provides a valid name. If the user provides no name by just pressing Enter, then
input() returns an empty string (
"") and the loop runs again because
not "" returns
True.
Now you can continue with your game by writing the code to provide the guessing functionality. You can do it by yourself, or you can expand the box below to check out a possible implementation.
The second part of the game should allow the user to enter a number from 1 to 10 as their guess. The game should compare the user’s input with the current secret number and take actions accordingly. Here’s a possible implementation:
while True:
    user_input = input("Guess a number between 1 and 10: ")
    if not user_input.isdigit():
        user_input = input("Please enter a valid number: ")
    guess = int(user_input)
    if guess == secret:
        print(f"Congrats {name}! You win!")
        break
    elif guess > secret:
        print("The secret number is lower than that...")
    else:
        print("The secret number is greater than that...")
You use an infinite
while loop to take the user’s input until they guess the
secret number. In every iteration, you check if the input matches
secret and provide clues to the user according to the result. Go ahead and give it a try!
As an exercise, you can restrict the number of attempts before the user loses the game. Three attempts could be a nice option in this case.
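One way to sketch the attempt-limited variant is to factor the guess-checking into a helper; the function and names below are hypothetical, not part of the original game, and input handling is left out so the logic is easy to test:

```python
def play(secret, guesses, max_attempts=3):
    """Return True if secret is matched within the first max_attempts guesses."""
    for guess in guesses[:max_attempts]:
        if guess == secret:
            return True
    return False


print(play(7, [3, 9, 7]))     # True: third attempt succeeds
print(play(7, [3, 9, 5, 7]))  # False: the correct guess arrives too late
```

In the full game you would collect each guess with input() inside the loop and print the "lower"/"greater" hints as before, breaking out once the attempts are spent.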
How was your experience with this little game? To learn more about game programming in Python, check out PyGame: A Primer on Game Programming in Python.
Now that you know how to use
not in Boolean contexts, it’s time to learn about using
not in non-Boolean contexts. That’s what you’ll do in the following section.
Using the
not Operator in Non-Boolean Contexts
Since the
not operator can also take regular objects as an operand, you can use it in non-Boolean contexts too. In other words, you can use it outside of an
if statement or a
while loop. Arguably, the most common use case of the
not operator in a non-Boolean context is to invert the truth value of a given variable.
Suppose you need to perform two different actions alternatively in a loop. In that case, you can use a flag variable to toggle actions in every iteration:
>>> toggle = False
>>> for _ in range(4):
...     print(f"toggle is {toggle}")
...     if toggle:
...         # Do something...
...         toggle = False
...     else:
...         # Do something else...
...         toggle = True
...
toggle is False
toggle is True
toggle is False
toggle is True
Every time this loop runs, you check the truth value of
toggle to decide which course of action to take. At the end of each code block, you change the value of
toggle so you can run the alternative action in the next iteration. Changing the value of
toggle requires you to repeat a similar logic twice, which might be error-prone.
You can use the
not operator to overcome this drawback and make your code cleaner and safer:
>>> toggle = False
>>> for _ in range(4):
...     print(f"toggle is {toggle}")
...     if toggle:
...         pass  # Do something...
...     else:
...         pass  # Do something else...
...     toggle = not toggle
...
toggle is False
toggle is True
toggle is False
toggle is True
Now the highlighted line alternates the value of toggle between True and False using the not operator. This code is cleaner, less repetitive, and less error-prone than the example you wrote before.

Using the Function-Based not Operator
Unlike the and operator and the or operator, the not operator has an equivalent function-based implementation in operator. The function is called not_(). It takes an object as an argument and returns the same outcome as an equivalent not obj expression:
>>> from operator import not_ >>> not_(True) False >>> not_(False) True
To use not_(), you first need to import it from operator. Then you can use the function with any Python object or expression as an argument. The result is the same as using an equivalent not expression.

Note: Python also has and_() and or_() functions. However, they reflect their corresponding bitwise operators rather than the Boolean ones.

The and_() and or_() functions also work with Boolean arguments:
>>> from operator import and_, or_ >>> and_(False, False) False >>> and_(False, True) False >>> and_(True, False) False >>> and_(True, True) True >>> or_(False, False) False >>> or_(False, True) True >>> or_(True, False) True >>> or_(True, True) True
In these examples, you use and_() and or_() with True and False as arguments. Note that the results of the expressions match the truth tables of the and and or operators, respectively.
Using the not_() function instead of the not operator is handy when you're working with higher-order functions, such as map(), filter(), and the like. Here's an example that uses the not_() function along with sorted() to sort a list of employees by placing empty employee names at the end of the list:
>>> from operator import not_ >>> employees = ["John", "", "", "Jane", "Bob", "", "Linda", ""] >>> sorted(employees, key=not_) ['John', 'Jane', 'Bob', 'Linda', '', '', '', '']
In this example, you have an initial list called employees that holds a bunch of names. Some of those names are empty strings. The call to sorted() uses not_() as a key function to create a new list that sorts the employees, moving the empty names to the end of the list.
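Beyond sorted(), not_ plugs directly into other higher-order functions such as filter() and map(). A small sketch — the readings list is invented for illustration:

```python
from operator import not_

readings = [3.5, 0.0, 2.1, 0.0, 7.8]  # hypothetical sensor data

# filter() keeps the items for which not_(item) is True, i.e. the falsy ones:
zeros = list(filter(not_, readings))
print(zeros)  # [0.0, 0.0]

# map() turns each item into its negated truth value:
flags = list(map(not_, readings))
print(flags)  # [False, True, False, True, False]
```

Passing not_ by name avoids writing a throwaway lambda such as lambda x: not x.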
Working With Python's not Operator: Best Practices

When you're working with the not operator, you should consider following a few best practices that can make your code more readable, clean, and Pythonic. In this section, you'll learn about some of these best practices related to using the not operator in the context of membership and identity tests.
You’ll also learn how negative logic can impact the readability of your code. Finally, you’ll learn about some handy techniques that can help you avoid unnecessary negative logic, which is a programming best practice.
Test for Membership
Membership tests are commonly useful when you're determining if a particular object exists in a given container data type, such as a list, tuple, set, or dictionary. To perform this kind of test in Python, you can use the in operator:
>>> numbers = [1, 2, 3, 4] >>> 3 in numbers True >>> 5 in numbers False
The in operator returns True if the left-side object is in the container on the right side of the expression. Otherwise, it returns False.

Sometimes you may need to check if an object is not in a given container. How can you do that? The answer to this question is the not operator.
There are two different syntaxes to check if an object is not in a given container in Python. The Python community considers the first syntax as bad practice because it’s difficult to read. The second syntax reads like plain English:
>>> # Bad practice >>> not "c" in ["a", "b", "c"] False >>> # Best practice >>> "c" not in ["a", "b", "c"] False
The first example works. However, the leading not makes it difficult for someone reading your code to determine if the operator is working on "c" or on the whole expression, "c" in ["a", "b", "c"]. This detail makes the expression difficult to read and understand.

The second example is much clearer. The Python documentation refers to the syntax in the second example as the not in operator. The first syntax is nevertheless common among people who are starting out with Python.
Now it's time to revisit the examples where you checked if a number was inside or outside a numeric interval. If you're working with integer numbers only, then the not in operator provides a more readable way to perform this check:
>>> x = 30 >>> # Between 20 and 40 >>> x in range(20, 41) True >>> # Outside 20 and 40 >>> x not in range(20, 41) False
The first example checks if x is inside the 20 to 40 range or interval. Note that you use 41 as the second argument to range() to include 40 in the check.

When you're working with integer numbers, this small trick about where exactly you use the not operator can make a big difference regarding code readability.
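range() only covers integers. For floats, a similar readable intent can be kept with chained comparisons — the values here are arbitrary:

```python
x = 29.5

# Between 20 and 40 (inclusive); works for floats too:
inside = 20 <= x <= 40
print(inside)  # True

# Outside the interval is just the negation of the chained check:
outside = not 20 <= x <= 40
print(outside)  # False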
Check the Identity of Objects

Another common requirement when you're coding in Python is to check for an object's identity. You can determine an object's identity using id(). This built-in function takes an object as an argument and returns an integer number that uniquely identifies the object at hand. This number represents the object's identity.

The practical way to check for identity is to use the is operator, which is pretty useful in some conditional statements. For example, one of the most common use cases of the is operator is to test if a given object is None:
>>> obj = None >>> obj is None True
The is operator returns True when the left operand has the same identity as the right operand. Otherwise, it returns False.

In this case, the question is: how do you check if two objects don't have the same identity? Again, you can use two different syntaxes:
>>> obj = None >>> # Bad practice >>> not obj is None False >>> # Best practice >>> obj is not None False
In both examples, you check if obj has the same identity as the None object. The first syntax is somewhat difficult to read and non-Pythonic. The is not syntax is way more explicit and clear. The Python documentation refers to this syntax as the is not operator and promotes its use as a best practice.
Avoid Unnecessary Negative Logic
The not operator enables you to reverse the meaning or logic of a given condition or object. In programming, this kind of feature is known as negative logic or negation.

Using negative logic correctly can be tricky because this logic is difficult to think about and understand, not to mention hard to explain. In general, negative logic implies a higher cognitive load than positive logic. So, whenever possible, you should use positive formulations.

Here is an example of a custom_abs() function that uses a negative condition to return the absolute value of an input number:
>>> def custom_abs(number): ... if not number < 0: ... return number ... return -number ... >>> custom_abs(42) 42 >>> custom_abs(-42) 42
This function takes a number as an argument and returns its absolute value. You can achieve the same result by using positive logic with a minimal change:
>>> def custom_abs(number): ... if number < 0: ... return -number ... return number ... >>> custom_abs(42) 42 >>> custom_abs(-42) 42
That's it! Your custom_abs() now uses positive logic. It's more straightforward and understandable. To get this result, you removed not and moved the negative sign (-) to modify the input number when it's lower than 0.
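As a quick sanity check, the positive-logic version agrees with the built-in abs() for positive, negative, and zero inputs:

```python
def custom_abs(number):
    # Positive-logic version from above.
    if number < 0:
        return -number
    return number

# Both implementations return the same results across signs.
for n in (42, -42, 0, -7.5):
    print(custom_abs(n), abs(n))
```
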
Note: Python provides a built-in function called abs() that returns the absolute value of a numeric input. The purpose of custom_abs() is to facilitate the topic presentation.

You can find many similar examples in which changing a comparison operator can remove unnecessary negative logic. Say you want to check if a variable x is not equal to a given value. You can use two different approaches:
>>> x = 27 >>> # Use negative logic >>> if not x == 42: ... print("not 42") ... not 42 >>> # Use positive logic >>> if x != 42: ... print("not 42") ... not 42
In this example, you remove the not operator by changing the comparison operator from equal (==) to different (!=). In many cases, you can avoid negative logic by expressing the condition differently with an appropriate relational or equality operator.
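The same substitution works for every comparison operator: each negated comparison has a direct positive counterpart, as this quick check shows:

```python
a, b = 3, 7

# Each pair is equivalent, so the positive form can always replace the negated one.
print((not a == b) == (a != b))  # True
print((not a < b) == (a >= b))   # True
print((not a > b) == (a <= b))   # True
```
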
However, sometimes negative logic can save you time and make your code more concise. Suppose you need a conditional statement to initialize a given file when it doesn't exist in the file system. In that case, you can use not to check if the file doesn't exist:
from pathlib import Path

file = Path("/some/path/config.ini")

if not file.exists():
    pass  # Initialize the file here...
The not operator allows you to invert the result of calling .exists() on file. If .exists() returns False, then you need to initialize the file. However, with a false condition, the if code block doesn't run. That's why you need the not operator to invert the result of .exists().

Note: The example above uses pathlib from the standard library to handle file paths. To dive deeper into this cool library, check out Python 3's pathlib Module: Taming the File System.
Now think of how to turn this negative conditional into a positive one. Up to this point, you don't have any action to perform if the file exists, so you may think of using a pass statement and an additional else clause to handle the file initialization:
if file.exists():
    pass  # YAGNI
else:
    pass  # Initialize the file here...
Even though this code works, it violates the "You aren't gonna need it" (YAGNI) principle, and it's a convoluted attempt to remove negative logic.
The idea behind this example is to show that sometimes using negative logic is the right way to go. So, you should consider your specific problem and select the appropriate solution. A good rule of thumb would be to avoid negative logic as much as possible without trying to avoid it at all costs.
Finally, you should pay special attention to avoiding double negation. Say you have a constant called NON_NUMERIC that holds characters that Python can't turn into numbers, such as letters and punctuation marks. Semantically, this constant itself implies a negation.

Now say you need to check if a given character is a numeric value. Since you already have NON_NUMERIC, you can think of using not to check the condition:
if char not in NON_NUMERIC:
    number = float(char)
    # Do further computations...
This code looks odd, and you probably won’t ever do something like this in your career as a programmer. However, doing something similar can sometimes be tempting, such as in the example above.
This example uses double negation. It relies on NON_NUMERIC and also on not, which makes it hard to digest and understand. If you ever get to a piece of code like this, then take a minute to try writing it positively or, at least, try to remove one layer of negation.
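One way to remove a layer of negation is to wrap the test in a positively named helper, so the single remaining negation lives in exactly one place. In this sketch, the contents of NON_NUMERIC and the helper is_numeric() are made up for illustration:

```python
NON_NUMERIC = set("abc,!?")  # hypothetical contents

def is_numeric(char):
    # The one negation is hidden behind a positive name.
    return char not in NON_NUMERIC

for char in "7a":
    if is_numeric(char):
        print(float(char))  # only "7" makes it here, printing 7.0
```

Callers now read is_numeric(char) instead of reasoning through char not in NON_NUMERIC.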
Conclusion
Python's not is a logical operator that inverts the truth value of Boolean expressions and objects. It's handy when you need to check for unmet conditions in conditional statements and while loops.

You can use the not operator to help you decide the course of action in your program. You can also use it to invert the value of Boolean variables in your code.
In this tutorial, you learned how to:
- Work with Python's not operator
- Use the not operator in Boolean and non-Boolean contexts
- Use operator.not_() to perform logical negation in a functional style
- Avoid unnecessary negative logic in your code whenever possible

To these ends, you coded a few practical examples that helped you understand some of the main use cases of the not operator, so you're now better prepared to use it in your own code.
https://realpython.com/python-not-operator/
Implementation details for Buffer class.
#include <BufferDetail.hh>
Implementation details for Buffer class.
Internally, BufferImpl keeps two lists of chunks, one list consists entirely of chunks containing data, and one list which contains chunks with free space.
Add unmanaged data to the buffer.
The buffer will not automatically free the data, but it will call the supplied function when the data is no longer referenced by the buffer (or copies of the buffer).
Copy data to a different buffer by copying the chunks.
It's a bit like extract, but without modifying the source buffer.
The number of chunks containing free space (note that an entire chunk may not be free).
Used for debugging.
Add enough free chunks to make the reservation size available.
Actual amount may be more (rounded up to next chunk).
An uninstantiable function: it is selected when the boost::is_fundamental check fails, and it will trigger a compile-time assertion.
http://avro.apache.org/docs/1.4.0/api/cpp/html/classavro_1_1detail_1_1BufferImpl.html
Objectives
Individual Assignment
1)Write an application that interfaces a user with an input &/or output device that you made.
Group Assignment
1)Compare as many tool options as possible.
Data Flow diagram
In the database creation menu you can choose a location that is closest to you, and select Next.
Next we need to create a database which will hold all our data. To do this, select the Realtime Database menu option on the top left, and you’ll be taken to the Realtime Database page. Select the Create Database button
Lastly you should see a new page with your new empty database, and we are all set!
You will be presented with the option to initialise your database in locked mode or test mode. Select test mode for now. The main difference is that in test mode a database access rule is placed allowing unauthorised access to your database for thirty days.
The second item we need to store is the project's API key. To get your API key, navigate to the project settings page by selecting the settings icon on the top right, and then selecting the Project settings menu.
Before continuing, there are two items you need to copy and store for future use in our embedded application. The first item we need is our Realtime Database URL, which you can find by copying the URL of the Realtime Database page.
You can create variables inside the Realtime Database, either to trigger functions or simply to hold user input.
Now click on Service accounts and then Database secrets. Under Database secrets you will find a secret key; copy this key and save it in a notepad. This is your Firebase authorization key, which you will need later.
TIP: You can update the rules from the 30-day test mode by setting read and write to true, so that you can use the database for a longer time.
Add Firebase library and define database link and authentication data.
#include <FirebaseESP32.h>
#define FIREBASE_HOST "Real time Database link which copied"
#define FIREBASE_AUTH "add Authentication key"
FirebaseData firebaseData;
This will call for authentication and get the variable named 'Variable Name', storing it as a string (I used a string since text box inputs from the app are strings).
Firebase.begin(FIREBASE_HOST, FIREBASE_AUTH);
Firebase.reconnectWiFi(true);
if (Firebase.getString(firebaseData, "Variable Name"))
{
String i = (firebaseData.stringData());
For updating, posting, getting, and creating database entries, and to explore Firebase further, refer to the library's function reference.
It's just drag and drop: you can add text boxes and buttons, and specify the size, color, images, and so on to make it look good.
MiT Interface (Designer)
These are the components: buttons to trigger actions, a text box to get data (the token) from the user, and fields to configure the authentication key and bucket URL.
Blocks Section.
On the other hand, the block editor is a different environment in which we can visually arrange the color-coded blocks that make up the logic of the app. We set the app's logic by arranging these blocks the way we would solve puzzle pieces.

The logic is: when the user clicks a button, authentication is done and the variable is updated with a value or with the contents of the text box.

You can test the app in the companion app or download the .apk file from Build, and it works: the data gets updated in Firebase.
The fulfillment (the function to be called) can be configured in either of two ways: a webhook or the inline editor.
Select the project.
Open the CLI binary for Windows and log in to your account.
When selecting the project, it will ask for the functions files in the local repo.
npm install -g firebase-tools
This will update the Firebase tools and libraries.
You can edit the function (index.js)
Local clone is created
firebase deploy
This will push the function to Firebase, and it will be triggered by Google Assistant, following the setup we did in Dialogflow.
Each block is called an applet; you can create one by logging in to IFTTT.
URL: you can get this URL from Firebase; it represents the value in Firebase and will update the value of the variable to the text field. I added inverted commas and backslashes since MIT App Inventor sends text box data in that format, so there are no crashes.
http://fab.academany.org/2021/labs/kochi/students/abel-tomy/week14(interface%20and%20application%20programming).html
Hide Forgot
A bug in glibc causes name resolution to crash applications, sometimes. Firefox can be very crash-y due to this bug; I've had days where it's crashed 10-20 times.
Upstream bug is:
there's a patch there. Can we pull the patch into F16's glibc, if it looks sane? It'd be nice not to have this biting too many testers. Thanks!
Nominating as a final blocker, as live images would be permanently buggy if we shipped this way...
I can reproduce quite easily with firefox also using fc16 and rawhide GNU libc packages.
I see this all the time with midori. I suspect anything that does a lot of dns lookups would hit it.
I've made a scratch build with the patch from the upstream bug:
So far I have not hit this bug after updating to that package. :)
*** Bug 732857 has been marked as a duplicate of this bug. ***
For me, firefox crashes reproducibly when visiting
*** Bug 710697 has been marked as a duplicate of this bug. ***
*** Bug 734018 has been marked as a duplicate of this bug. ***
Kevin,
(In reply to comment #3)
> I've made a scratch build with the patch from the upstream bug:
>
Unfortunately the rpms are already gone from koji and
glibc is still not updated...
(Is there any way to keep scratch builds around longer?)
Is it possible you could post the srpm somewhere? :)
For the record, Andreas is working to reproduce and fix this bug, according to this post:
glibc-2.14.90-7 has been submitted as an update for Fedora 16.
Note the fix for this in glibc-2.14.90-7 is not the same fix that was proposed in . The fix in -7 seems to be this:
--- glibc-2.14-213-g3ba5751/resolv/res_query.c
+++ glibc-2.14.90-6/resolv/res_query.c
@@ -248,7 +248,7 @@ __libc_res_nquery(res_state statp,
&& *resplen2 > (int) sizeof (HEADER))
{
/* Special case of partial answer. */
- assert (hp != hp2);
+ assert (n == 0 || hp != hp2);
hp = hp2;
}
else if (answerp2 != NULL && *resplen2 < (int) sizeof (HEADER)
I'm not sure if this is something Andreas came up with, or if this is a better upstream fix.
Package glibc-2.14.90-7:
* should fix your issue,
* was pushed to the Fedora 16 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing glibc-2.14.90-7'
as soon as you are able to.
Please go to the following url:
then log in and leave karma (feedback).
Package glibc-2.14.90-8:
* should fix your issue,
* was pushed to the Fedora 16 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing glibc-2.14.90-8'
as soon as you are able to.
Please go to the following url:
then log in and leave karma (feedback).
glibc-2.14.90-8 has been pushed to the Fedora 16 stable repository. If problems still persist, please make note of it in this bug report.
I think I still have this bug on a just distro-synced Fedora 16.
Transmission-gtk crashed, and gave me the following:
res_query.c:258: __libc_res_nquery: Assertion `hp != hp2' failed.
That's with:
glibc-2.14.90-13.x86_64
I'm trying to reproduce it now to get a full backtrace (I didn't have ABRT running), but even the original torrent (which made transmission-gtk crash twice) doesn't reproduce the issue any more. :-/
Andreas, Adam mentioned in comment 10 that you were trying to reproduce the bug. Did you ever manage to find a reproducer, so that I could try it and hopefully get more information?.
--
Fedora Bugzappers volunteer triage team
(In reply to comment #17)
Well, the DNS from my IP sucks so much, I've just had another crash with transmission-gtk. (perhaps they read your comment and thought they'd help us reproduce it? :)
This time I got the full traceback from ABRT, and it seems the bug was already reported, so ABRT just added this:
Perhaps this bug should be reopened?
The bug is still present. I can reproduce it with glibc-2.14.90-21 using any version of firefox visiting URLs from rae.es like this:
A simple and isolated testcase crashing on glibc:
#include <sys/types.h>
#include <sys/socket.h>
#include <string.h>   /* for memset */
#include <netdb.h>
#include <unistd.h>

int main ()
{
    struct addrinfo *res, hints;
    memset(&hints, 0, sizeof(hints));
    hints.ai_flags |= AI_ADDRCONFIG;
    hints.ai_socktype = SOCK_STREAM;
    getaddrinfo ("buscon.rae.es", NULL, &hints, &res);
    return 0;
}
a: res_query.c:258: __libc_res_nquery: Assertion `hp != hp2' failed.
Program received signal SIGABRT, Aborted.
0x00110416 in __kernel_vsyscall ()
(gdb) bt
#0 0x00110416 in __kernel_vsyscall ()
#1 0x4fdee98f in raise () from /lib/libc.so.6
#2 0x4fdf02d5 in abort () from /lib/libc.so.6
#3 0x4fde76a5 in __assert_fail_base () from /lib/libc.so.6
#4 0x4fde7757 in __assert_fail () from /lib/libc.so.6
#5 0x4114f52c in __libc_res_nquery () from /lib/libresolv.so.2
#6 0x4114f71e in __libc_res_nquerydomain () from /lib/libresolv.so.2
#7 0x4114f9d3 in __libc_res_nsearch () from /lib/libresolv.so.2
#8 0x00124f6f in _nss_dns_gethostbyname4_r () from /lib/libnss_dns.so.2
#9 0x4fe9682b in gaih_inet () from /lib/libc.so.6
#10 0x4fe99d1d in getaddrinfo () from /lib/libc.so.6
#11 0x0804842c in main () at a.c:14
Using nice nameservers looks like does not trigger the bug.
I can definitively trigger it using these nameservers:
nameserver 87.216.1.65
nameserver 87.216.1.66
(gdb) bt
#0 0x00110416 in __kernel_vsyscall ()
#1 0x4fdee98f in __GI_raise (sig=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
#2 0x4fdf02d5 in __GI_abort () at abort.c:91
#3 0x4fde76a5 in __assert_fail_base (fmt=0x4ff27be8 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=0x4115ac18 "hp != hp2", file=0x4115ac02 "res_query.c", line=258, function=0x4115ac22 "__libc_res_nquery") at assert.c:94
#4 0x4fde7757 in __GI___assert_fail (assertion=0x4115ac18 "hp != hp2", file=0x4115ac02 "res_query.c", line=258, function=0x4115ac22 "__libc_res_nquery") at assert.c:103
#5 0x4114f52c in __libc_res_nquery (statp=0x4ff69c00, name=0xbfffd89b :258
#6 0x4114f71e in __libc_res_nquerydomain (statp=0x4ff69c00, name=<optimized out>, domain=0x4ff69c60 "",:578
#7 0x4114f9d3 in __libc_res_nsearch (statp=0x4ff69c00, name=0x8048514 :416
#8 0x00124f6f in _nss_dns_gethostbyname4_r (name=0x8048514 "buscon.rae.es", pat=0xbfffef2c, buffer=0xbfffea20 "\300\250\001\t", buflen=1024, errnop=0xbfffef30, herrnop=0xbfffef3c, ttlp=0x0) at nss_dns/dns-host.c:314
#9 0x4fe9682b in gaih_inet (name=0x8048514 "buscon.rae.es", service=<optimized out>, req=0xbffff0dc, pai=0xbffff084, naddrs=0xbffff094) at ../sysdeps/posix/getaddrinfo.c:842
#10 0x4fe99d1d in __GI_getaddrinfo (name=0x8048514 "buscon.rae.es", service=<optimized out>, hints=<optimized out>, pai=0xbffff0fc) at ../sysdeps/posix/getaddrinfo.c:2356
#11 0x0804842c in main () at a.c:14
*** Bug 768549 has been marked as a duplicate of this bug. ***
This assert is also back for me in glibc-2.14.90-18.x86_64. (mainly transmission)
I saw the problem around the same time as original reporter (August). The issue back then was fixed by updating to the -7 or -8 version from updates-testing (Comment 13/14).
What puzzles me is that if the fix was to change the assert:
- assert (hp != hp2);
+ assert (n == 0 || hp != hp2);
Then how come that the assert looks like this:
"__libc_res_nquery: Assertion `hp != hp2' failed"
I assume what happened was that Andreas' (working) fix was reverted and the upstream patch was applied instead.
In response to c#22/23, the reason the assert text isn't what you expect is because we're hitting a different assert.
There are two places in res_query which (prior to Andreas's change) which assert (hp != hp2). Andreas's change only modified one of those asserts to be assert (n == 0 || hp != hp2). The new failures are the other assert (which Andreas didn't change).
You can see this by matching up the line # in res_query with the backtrace provided in c20.
I haven't managed to get this to fail, but I've got it the testcase running in a loop while I read the code in the hopes that it'll fail.
Fernando, if you can trigger this with your testcase and get me the value of *resplen2 in frame #5 it would be helpful.
sure:
(gdb) p *resplen2
$6 = 1
I have a debuging session opened right now with the aborted testcase, so anything you need just ask here or on #fedora-devel IRC
What's your nick on IRC?
Presumably you're using the 87.216.1.65/66 nameservers? I can't reach either of them unfortunately.
Also in frame #5
*answer, anslen, *answerp, *answerp2, *nanserp2
(gdb) p *answer
$5 = 231 '\347'
(gdb) p anslen
$6 = 2048
(gdb) p *answerp
$7 = (u_char *) 0xbfffe1e0 "\347\016\201\200"
(gdb) p *answerp2
$8 = (u_char *) 0xbfffe1e0 "\347\016\201\200"
(gdb) p *nanswerp2
$9 = 2048
update from bug 768549
same exact error for
could it be related to a DNS query timeout?
host has address 78.46.77.246
;; connection timed out; no servers could be reached
;; connection timed out; no servers could be reached
host ocsp.entrust.net
ocsp.entrust.net has address 216.191.247.203
;; connection timed out; no servers could be reached
;; connection timed out; no servers could be reached
I spent some time remotely debugging with (thanks Fernando) just before Christmas. It's definitely a problem with how the server responds and glibc's use of the multiple result buffers. There were one or two more bits of state I needed to track down to fully understand the paths through the code. I haven't caught Fernando on IRC (we're offset by ~8hrs) so I haven't been able to dive back into it yet.
Eugene, if the DNS resolver you're using is publicly accessable, that would be a big help as I could debug it here without having to coordinate with Fernando to get access to his box (which uses a private DNS server which often triggers this problem). Can you please send me the contents of your /etc/resolv.conf file via private message (law@redhat.com)?
I see the problem when running azureus - my wlan cable/modem seems to get a bit angry and drops connections.
This happens with glibc-2.14.90-24.fc16.4.i686 when browsing with FireFox-10:
Program received signal SIGABRT, Aborted.
[Switching to Thread 0x9ebffb40 (LWP 8473)]
0x008c8416 in __kernel_vsyscall ()
Missing separate debuginfos, use: debuginfo-install libthai-0.1.14-4.fc15.i686
(gdb) bt
#0 0x008c8416 in __kernel_vsyscall ()
#1 0x49d4998f in __GI_raise (sig=6)
at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
#2 0x49d4b2d5 in __GI_abort () at abort.c:91
#3 0x49d426a5 in __assert_fail_base (fmt=
0x49e82c48 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=
0x4a215c18 "hp != hp2", file=0x4a215c02 "res_query.c", line=258, function=
0x4a215c22 "__libc_res_nquery") at assert.c:94
#4 0x49d42757 in __GI___assert_fail (assertion=0x4a215c18 "hp != hp2", file=
0x4a215c02 "res_query.c", line=258, function=
0x4a215c22 "__libc_res_nquery") at assert.c:103
#5 0x4a20a52c in __libc_res_nquery (statp=0x9ebffdc4, name=
0x9ebfda1b "ecosia-de.tumblr.com:258
#6 0x4a20a71e in __libc_res_nquerydomain (statp=0x9ebffdc4,
name=<optimized out>, domain=0x9ebffe24 :578
#7 0x4a20a9d3 in __libc_res_nsearch (statp=0x9ebffdc4, name=
0xa3b325bc "ecosia-de.tumblr.com", class=1, type=62321, answer=
0x9ebfe310 "ݗ\201\203", anslen=2048, answerp=0x9ebfeb30, answerp2=
---Type <return> to continue, or q <return> to quit---
0x9ebfeb34, nanswerp2=0x9ebfeb38, resplen2=0x9ebfeb3c) at res_query.c:416
#8 0x005f2f6f in _nss_dns_gethostbyname4_r (name=
0xa3b325bc "ecosia-de.tumblr.com", pat=0x9ebff0ac, buffer=0x9ebfeba0 "",
buflen=1024, errnop=0x9ebff0b0, herrnop=0x9ebff0bc, ttlp=0x0)
at nss_dns/dns-host.c:314
#9 0x49df189b in gaih_inet (name=0xa3b325bc "ecosia-de.tumblr.com",
service=<optimized out>, req=0x9ebff26c, pai=0x9ebff204, naddrs=0x9ebff214)
at ../sysdeps/posix/getaddrinfo.c:842
#10 0x49df4d8d in __GI_getaddrinfo (name=0xa3b325bc "ecosia-de.tumblr.com",
service=<optimized out>, hints=<optimized out>, pai=0x9ebff28c)
at ../sysdeps/posix/getaddrinfo.c:2356
#11 0x0059f0cb in PR_GetAddrInfoByName ()
same happens for azureus - I always thought java had this specific problem, but it seems its deeper down in gblibc
Clemens, what are the contents of your resolv.conf? I'm still looking for a public dns server that exhibits the problem behaviour so I can debug locally rather than use Fernando's machine on the other side of the pond. Ideally I'll have time to look at this again later this week.
Hi Jeff,
/etc/resolv.conf has the following two entries:
nameserver 212.186.211.21
nameserver 195.34.133.21
I also have a packet-dump which was created while firefox crashed showing lots of ugly stuff going on (tcp dups, retransmits, ..), when running azureus. If you are interested, I could upload it somewhere?
most likely you will not be able to reproduce the problem by just using the nameserver provided above.
I *only* get the crashes when running Azureus, which causes an awful lot of TCP retransmits and duplicates and causes even the wlan-driver to get a bit "angry" from time to time.
If required I could provide you with my machine, and create a remote account for you.
I pulled in a patch from Debian which resolves this issue into rawhide & f17.
The patch was submitted upstream some time ago, but Uli hasn't included it into the upstream sources yet.
Thanks :)
Any chance this will end up in Fedora-16 too? (as its only a bug-fix and not a new feature)
Not planning to right now. My time is quite limited and my focus is turning towards F17 issues.
That's unfortunate, as it means I have to live with Firefox crashing a dozen times a day while running azureus.
If glibc just wouldn't be maintained the way it is :(
You could always update to the F17 glibc which includes the fix.
Upstream glibc maintenance is a serious issue which increases the maintenance burden for all the downstream consumers including Fedora & Debian. I don't know what the long term solution for upstream glibc will be; however, I have been in contact with some downstream consumers to see where we can work together to avoid duplicated efforts.
hmmm...
i've had Vuze 4.7.0.2 crashing @ random intervals with
"java: res_query.c:258: __libc_res_nquery: Assertion `hp != hp2' failed."
could be related? Fedora 16, glibc 2.14.90-24.fc16.6 64 bit
Yep, it's exactly the same bug - hopefully fixed in Fedora 17; it should be solvable by installing a glibc shipped with the Alpha releases of Fedora 17.
It's really sad that there will be no backport of the fix to Fedora 16.
It's definitely fixed in F17. Unfortunately I simply don't have the time to backport changes into F16 and issue new updates.
Unfortunately, it may not be possible any more to use the F17 glibc on F16 because of the changes to move everything into /usr. I haven't tried an rpm --force.
(In reply to comment #42)
> Its really sad that will be no backport of the fix to Fedora 16.
+1, a backport for F16 (which is supposed to be supported for other 8 months!) would be very much appreciated, my Firefox crashes 2-3 times per day!
glibc-2.14.90-24.fc16.7 has been submitted as an update for Fedora 16.
glibc-2.14.90-24.fc16.7 has been pushed to the Fedora 16 stable repository. If problems still persist, please make note of it in this bug report.
https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=730856
Time for a new release. The 0.9.8 release is pretty much identical to 0.9.3, except for a revised "iterparse" mechanism:

    for event, elem in iterparse(source):
        ...

By default, iterparse now only returns "end" events (issued when an element has been completed, and all child elements are available). This speeds things up a bit, and simplifies the event-handling code. An example:

    for event, elem in iterparse(source):
        if elem.tag == "title":
            print "document title is", repr(elem.text)
            break

To request other events, including extended information about namespaces, use the "events" option (see the CHANGES document for details).

Like the rest of cElementTree, the iterparse mechanism is fast. On my test machine, it's over four times faster than xml.sax, 2.5 times faster than pyexpat, and even a bit faster than my own sgmlop.

For more information on this library, including download instructions, detailed benchmark figures, and more, see:

enjoy /F
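For readers on modern Python: cElementTree's iterparse survives in the standard library as xml.etree.ElementTree.iterparse, and the pattern described above works there unchanged. A small runnable sketch (Python 3 syntax; the XML sample is invented):

```python
# Sketch of the iterparse pattern from the announcement, using the
# standard-library descendant of cElementTree.
import io
import xml.etree.ElementTree as ET

XML = b"<doc><title>Hello</title><body>text</body></doc>"

def document_title(source):
    # By default, iterparse yields only ("end", element) pairs, so the
    # element is complete (children and text available) when we see it.
    for event, elem in ET.iterparse(source):
        if elem.tag == "title":
            return elem.text
    return None

print(document_title(io.BytesIO(XML)))  # -> Hello
```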
https://mail.python.org/pipermail/xml-sig/2005-January/010834.html
I'm trying to understand the use of yaml.load()
import yaml
document = """
a: 1
b:
c: 3
d: 4
"""
print yaml.dump(yaml.load(document), default_flow_style=False)
AttributeError: 'module' object has no attribute 'dump'
You called your example yaml.py, and as such your test program is imported by the import yaml statement, and it doesn't have a dump routine. Just rename your yaml.py to something like test_yaml.py.
You should also use:

    import sys
    yaml.dump(yaml.load(document), sys.stdout, default_flow_style=False)

as not providing a stream as the second parameter to dump() causes the output to first be written to a StringIO() object, then retrieved by .getvalue() on that object, and only then written out to sys.stdout. It is faster to write to the stream directly.
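The shadowing pitfall behind this question is easy to reproduce with any module name. A small self-contained demonstration (using an invented module name, shadowme, instead of yaml, so it runs anywhere without PyYAML installed):

```python
# Demonstrate module shadowing: a local file with a module's name is
# imported instead of the real module, so expected attributes are missing.
import os
import sys
import tempfile

tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "shadowme.py"), "w") as f:
    f.write("a = 1\n")  # note: no dump() defined in this file

sys.path.insert(0, tmp)  # front of sys.path, like the script's own directory
import shadowme

print(hasattr(shadowme, "dump"))  # -> False, hence the AttributeError
```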
https://codedump.io/share/gZguRDCQP8Tw/1/trying-to-use-yamlload-in-python-fails
Before you start, I highly recommend you read Pavel Zolnikov's article entitled "Extending Explorer with Band Objects using .NET & Windows Forms" first.
Pavel wrote a fantastic article explaining the ins & outs of using COM Interop to write your own customizable band objects for Explorer using .NET 2003. Without that article, this one wouldn't have come about, so kudos to Pavel.
One of the niceties about the .NET 2.0 environment is the rich suite of form controls that have been included for form design. While the 2003 environment was great and all, it was missing a lot of the funky controls such as tool strips and dropdown buttons that we've come to expect as standard in a nice Explorer bar UI. My goal was to design a nice neat tool bar solution so that I had several shortcut links to the sites and systems I used most often, and also to provide 'at-a-glance' information by displaying data in a label which updated on a regular basis.
Porting Pavel's 2003 BandObjectLib code to 2005 was a relatively painless experience, so I'll just skim through a few of the minor details:
[assembly: ]
SHDocVw
System.Drawing
System.Windows.Forms
cd $(ProjectDir)bin\Debug
"C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\gacutil" /if CustomToolbar.dll
"C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\gacutil" /if Interop.SHDocVw.dll
Note * If you plan on using this software yourself, be aware that Strongly Named Libraries in the GAC are treated as being distinct and separate if they have differing version numbers, so I'd recommend keeping the Assembly Version number in the lib as some static value. E.g.:
[assembly: AssemblyVersion("1.0.0.0")]
So we're ready to build our first toolbar. Setting up the project is easy enough. In the attached code, I've just created it as an additional project within the same solution. It's a class library project. It requires direct references to the strong named DLLs in the bin folder of our BandObjectLib project which have already been installed in the GAC. And it needs a strong named key and a static Assembly Version Number.
Since the BandObject class is essentially a beefed up derived class of UserControl, we can simply add a UserControl to our project and then alter the code-behind so that our control extends a BandObject instead. In this example, I'm creating a horizontal Explorer Toolbar. Since this control will be exposed through COM, we need to specify a GUID attribute to uniquely identify the class. The GUID attribute is part of the System.Runtime.InteropServices namespace.
using BandObjectsLib;
using System.Runtime.InteropServices;
[Guid("AE07101B-46D4-4a98-AF68-0333EA26E113")]
[BandObject("My Toolbar", BandObjectStyle.Horizontal
| BandObjectStyle.ExplorerToolbar
| BandObjectStyle.TaskbarToolBar, HelpText = "My First Toolbar")]
public class MyToolBar : BandObject
{ ...
You have all the facilities of a standard UserControl at your finger tips through the designer canvas, so feel free to add anything you want from simple buttons to mini-browsers to completely separate forms which popup on a click event. In my example, I've added a ToolStrip and populated it with some standard toolstrip controls: buttons, dropdown buttons, labels, textboxes, separators, and progress bars.
To make the bar look a little more aesthetically pleasing, I've embedded some PNG images (thanks to the silk icon collection) in an Embedded Resource File in my project and then set the button images. Simply set your toolstrip buttons to display "ImageAndText" or "Image", and specify an image from your resx file. When you compile your toolbar, all the image files will be embedded in the DLL, so there's no need for any installation directories and file bundles. Resource files are very convenient for embedding all sorts of dynamic resources or for making your toolbar multi-lingual.
this.tsbtn1.Image = global::BandObjectsExample.Resources.redcircle;
this.tsbtn1.ImageAlign = System.Drawing.ContentAlignment.MiddleLeft;
this.tsbtn1.ImageTransparentColor = System.Drawing.Color.Magenta;
In my example, I've created a couple of sample buttons to pop open some websites using your system's default browser.
private void OpenWebPage(string url)
{
System.Diagnostics.Process process = new System.Diagnostics.Process();
process.StartInfo.FileName = url;
process.Start();
}
As an example of a status polling facility, I've created a quick and dirty routine to poll my Winamp web interface for the currently playing track and to display it on a label at the end of the strip. This is performed on a timer which ticks every 30 seconds. Be careful though about having multiple instances of your toolbar (e.g., in multiple browsers) as this might have an adverse effect on your system if the task being performed on the timer occurs very often or is very processor intensive. For my Winamp ticker, I only ever have one instance running on my taskbar beside the system clock.
Once you've finished designing your toolbar in the IDE designer, then it's time to build it and take it for a test drive. You'll need to add a post build job to this project also. The first task is to install your toolbar DLL in the Global Assembly Cache. Since it's going to be used through COM, you'll also need to register your assembly using the Regasm utility contained in the framework installation folder.
cd $(ProjectDir)bin\Debug
"C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\gacutil" /if MyToolbar.dll
"C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\regasm" MyToolbar.dll
Note * By default, .NET 2.0 assemblies are set to be invisible to COM. In your AssemblyInfo.cs file, you'll find an attribute called [Assembly:ComVisible]. This attribute states whether or not the types in your .NET assembly should be exposed to COM. You have to set this value to true, otherwise the Regasm program will not be able to see your toolbar and the registration will fail.
// Setting ComVisible to false makes the types in this assembly not visible
// to COM components. If you need to access a type in this assembly from
// COM, set the ComVisible attribute to true on that type.
[assembly: ComVisible(true)]
Once you've successfully built and registered your toolbar, it should appear on the right click menu of your taskbar and the toolbar context menu in the Internet Explorer. It's worth pointing out that Explorer caches COM objects when they are first loaded, so after doing a rebuild, you might not necessarily see the updates in your toolbar. There are a couple of ways of getting around this such as changing your folder settings to launch folders in new Explorer instances, or to kill the Explorer exe (not recommended :) ) from your Task Manager.
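If you do go the heavy-handed route, the usual incantation from a command prompt looks like this (a rough sketch; as noted above, killing Explorer is the "not recommended" option):

```bat
:: Restart Explorer so the rebuilt, re-registered band object is reloaded.
taskkill /f /im explorer.exe
start explorer.exe
```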
The sky really is the limit for these band object controls. In my example, I've only shown the ToolStrip control, populated with some buttons and other simple controls. I've added some additional functionalities such as a ContextMenu and the timer control, but there's nothing stopping you from adding any control available. Post them to CodeProject if you come up with a good one :)
This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
A list of licenses authors might use can be found here.
try
{
DrawThemeParentBackground (this.Handle, hdc, ref rec);
e.Graphics.ReleaseHdc (hdc);
}
catch (Exception)
{
e.Graphics.ReleaseHdc (hdc);
base.OnPaintBackground (e);
}
}
else
{
base.OnPaintBackground (e);
}
}
https://www.codeproject.com/Articles/14141/Band-Objects-NET-Redux
In large projects, I kept feeling like I needed another layer of abstraction. Something that would help organize all these simple, tiny functions into a cohesive whole. Something that would help guide the structure of the app as new features were added. That's roughly where my head was at when I began to read about hexagonal architecture.
Hexagons to the Rescue!
I’ve heard and read about hexagonal architecture several times over the years, sometimes by its other name “Ports and Adapters.” I also think that the concept of “Functional Core, Imperative Shell” is closely related. The concept seemed like it might provide the next layer of structure I was seeking.
Recently, I started on a project that seemed like a particularly good candidate for hexagonal architecture. It featured a handful of unrelated dependencies. Additionally, it was both large enough to benefit from hexagonal architecture and small enough to provide a good testing ground. In this post, I’ll discuss my experience and results.
Case Study
In my case, I’m working on a cross-platform mobile app using Xamarin that tracks data from a Bluetooth device. You can think of it like the app for FitBit, though in my case, the actual device is quite different.
Basically, the app has to do three things:
- Connect to the device via Bluetooth and gather data from messages initiated by the device
- Track the data from the device in an internal database
- Render a visualization of recent data to the user
Here’s how I structured the app, using hexagonal architecture to guide the organization:
- Inner layer: Common data structures and core business rules.
- Middle layer: Submodules for Bluetooth, data persistence, and UI rendering and events. Each submodule can utilize common data structures, but they don’t talk to each other.
- Outer layer: Thin layer for handling events from the middle layer, generally by instrumenting the use of one or more submodules.
Let’s examine each layer in detail:
Inner layer: common data structures and core business rules
In the innermost layer, I define the core data structures to be used by the rest of the app. These are generally tightly coupled to the business domain. I’m using a Redux-like pattern in my app, so I also define the structure of the global state as well as my actions and reducers. Finally, any business rules that can be severed from external dependencies can live here, as well.
All those rules are defined in a functional way, free of side effects. Consequently, this layer is trivial to reason about, and trivial to test.
Middle layer: submodules for Bluetooth, data persistence, and rendering the view
The common data structures act as a lingua franca for the layers in the middle. Each of these submodules exposes one main interface to consumers, which may include event hooks for actions triggered from within the module. For example, the Bluetooth interface may provide a hook for when a new Bluetooth message is received, or the rendering/UI interface may provide a semantically meaningful hook when a certain UI element is clicked.
Each interface provides a single point of access to the submodule and hides implementation specifics. For example, to store our data, we are using an internal SQLite database, but the interface does not expose the database or any data types which are specific to the database (e.g. ORM classes).
When defining the interface, submodules use either the common data structures or specific types defined by the submodule. Because of this, changes to implementation do not require a change to interface. For example, we could change our persistence layer, even to the point of using HTTP to persist to an external back end, and the interface could stay the same. Compare and contrast this with, for example, trying to excise the use of Active Record methods from a typical Rails project.
Central to this pattern is the notion that an external dependency should be contained to one module. If, for instance, I felt the need to add the SQLite library to the Bluetooth module, that would be a strong indicator of a problem in my architectural design.
Because each submodule only deals with one external dependency, this layer is fairly straightforward to reason about and to test, if slightly less so than the inner layer.
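To make the "single point of contact" idea concrete, here is a hypothetical sketch of what the persistence submodule's interface might look like (all names are invented for illustration; EventData stands in for one of the inner layer's common data structures):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical interface: consumers see only common data structures,
// never SQLite-specific types such as ORM classes or connections.
public interface IPersistence
{
    Task SaveData(EventData data);
    Task<IReadOnlyList<EventData>> LoadRecent(int count);
}
```

Swapping SQLite for an HTTP back end would then mean writing a new class that implements IPersistence, with no visible change to the outer layer.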
Outer layer: thin layer for handling events and instrumenting submodules
So what happens when I need to perform an operation that involves two submodules? That’s where the outer layer comes in. This layer, which I sometimes call the “sagas” layer in honor of redux-saga, handles events triggered by users or by external dependencies, generally by having a short function dispatch to one or more submodules in the inner layer.
Say, for example, that when an event comes in, we want to save the data and then send an ACK back to the Bluetooth layer. That might look something like:
public class MyAppEvents
{
    /* ... */
    public async Task HandleNewData(EventData data)
    {
        await _persistence.SaveData(data);
        await _bluetooth.SendAck();
    }
    /* ... */
}
Generally, I try to keep code here as short as possible. Otherwise, it will devolve back into the same inter-connected mess I am trying to move away from. That said, even if the balance isn’t always perfect, I still have a clear list of the main events in the app and how they are handled, and I can write straightforward system or integration tests for each event.
Considerations
There are a few extra hexagonal architecture considerations I’d like to discuss that don’t quite fit into any one layer.
Detailed input and return types
If you are in a statically typed language, I can’t stress enough the value of defining types whenever you have a specific set of data. That way, your consumers don’t have to guess what part of a return type or an input type is relevant to the function–everything is relevant. Additionally, if your language has algebraic types–or you don’t mind creating them longhand–you can have a function explicitly indicate a list of outcomes.
I think this is much more helpful than relying on try/catch for anything off the happy path, or relying on the consumer to know what your bool return type “means.”
Using submodules to organize around dependencies
I think this is what was missing in previous projects, where I mis-applied the single-responsibility principle to require that each class should have one or two functions, max. I still stand by that opinion for code inside the submodules, or anything in the inner layer. However, I’ve come to the conclusion that the “single responsibility” of a top-level submodule class is to provide a single point of contact between a submodule and the rest of the app.
That said, if you feel you need 40 or 50 functions to provide your “single point of contact,” it still seems like a smell to me. There’s a good chance that a better division of submodules could streamline their responsibilities, and thus the number of functions in each module.
For me, a useful question is: “Do these functions feel like ‘siblings’ in terms of complexity?” That is, are some much more high-level than others? Are some much more generic than others? If so, you may be missing a level of abstraction.
Testing
I’ve stated above how each layer of the hexagonal architecture lends itself to a specific type of test, but it is also worth noting that you could write “unit” tests at high layers, using fakes or mocks for the lower layers. This can be a good fit sometimes if, for example, you want to test a series of side effects in isolation from some complicated business rules.
Scaling
I would say my app is small-to-medium in size. Accordingly, I really can’t offer great guarantees as to how well this approach scales. That said, I love how when I go to add a new feature, I know exactly where each part should go. And while submodules do grow over time, they don’t lose their “focus” quite as easily as code that is constantly intertwining external dependencies. I think I’m starting to get a feel for the “pit of success,” and I like it.
Setting up your project structure for success
This project uses standard .NET organization, including .sln, .csproj, and NuGet for dependencies. While sometimes the .NET build system can seem large and unwieldy, in this case, I actually found it helpful.
I split up my submodules into their own .csproj projects, which requires me to be very explicit about a) external dependencies and b) internal dependencies of one project on another. Once the basic structure is defined, I can’t violate the hexagonal architecture unless I add a project reference or a NuGet package inappropriately. That’s much easier to notice than if everything were in one .csproj, where I am one “using” statement away from breaking the architecture.
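As a hedged illustration of that point (project names invented; exact .csproj syntax varies by tooling era), the Bluetooth submodule's project file might contain only a reference like this:

```xml
<!-- Hypothetical: the Bluetooth project may reference the inner core
     project and its own external dependency, but never the persistence
     or UI projects. -->
<ItemGroup>
  <ProjectReference Include="..\MyApp.Core\MyApp.Core.csproj" />
</ItemGroup>
```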
Conclusions
At least for the time being, I’m sold on this architectural pattern. It seems to provide smart guardrails as software grows in complexity. I want to try it out on an API server project next, as I feel like the standard back end frameworks provide some challenges in terms of thinking hexagonally. Plus, the overall structure of a back end seems to me to go down and up (top-level HTTP handler -> process business rules & persistence -> return data to HTTP handler), rather than up and down (submodule event -> outer layer handler -> submodule function(s)). So I’m interested to see what adapting the pattern for a server API will be like.
And just in general, if you have any stories of how you’ve implemented hexagonal architecture, I would love to hear them.
https://spin.atomicobject.com/2017/11/21/hexagonal-architecture/
Using the pandoc API
Pandoc can be used as a Haskell library, to write your own conversion tools or power a web application. This document offers an introduction to using the pandoc API.
Detailed API documentation at the level of individual functions and types is available at.
Pandoc’s architecture
Pandoc is structured as a set of readers, which translate various input formats into an abstract syntax tree (the Pandoc AST) representing a structured document, and a set of writers, which render this AST into various input formats. Pictorially:
[input format] ==reader==> [Pandoc AST] ==writer==> [output format]
This architecture allows pandoc to perform M × N conversions with M readers and N writers.
The Pandoc AST is defined in the pandoc-types package. You should start by looking at the Haddock documentation for Text.Pandoc.Definition. As you'll see, a Pandoc is composed of some metadata and a list of Blocks. There are various kinds of Block, including Para (paragraph), Header (section heading), and BlockQuote. Some of the Blocks (like BlockQuote) contain lists of Blocks, while others (like Para) contain lists of Inlines, and still others (like CodeBlock) contain plain text or nothing. Inlines are the basic elements of paragraphs.

The distinction between Block and Inline in the type system makes it impossible to represent, for example, a link (Inline) whose link text is a block quote (Block). This expressive limitation is mostly a help rather than a hindrance, since many of the formats pandoc supports have similar limitations.

The best way to explore the pandoc AST is to use pandoc -t native, which will display the AST corresponding to some Markdown input:
% echo -e "1. *foo*\n2. bar" | pandoc -t native
[OrderedList (1,Decimal,Period)
 [[Plain [Emph [Str "foo"]]]
 ,[Plain [Str "bar"]]]]
A simple example
Here is a simple example of the use of a pandoc reader and writer to perform a conversion:
import Text.Pandoc
import qualified Data.Text as T
import qualified Data.Text.IO as TIO

main :: IO ()
main = do
  result <- runIO $ do
    doc <- readMarkdown def (T.pack "[testing](url)")
    writeRST def doc
  rst <- handleError result
  TIO.putStrLn rst
Some notes:
- The first part constructs a conversion pipeline: the input string is passed to readMarkdown, and the resulting Pandoc AST (doc) is then rendered by writeRST. The conversion pipeline is "run" by runIO (more on that below).
- result has the type Either PandocError Text. We could pattern-match on this manually, but it's simpler in this context to use the handleError function from Text.Pandoc.Error. This exits with an appropriate error code and message if the value is a Left, and returns the Text if the value is a Right.
The PandocMonad class
Let's look at the types of readMarkdown and writeRST:
readMarkdown :: PandocMonad m => ReaderOptions -> Text -> m Pandoc
writeRST     :: PandocMonad m => WriterOptions -> Pandoc -> m Text
The PandocMonad m => part is a typeclass constraint. It says that readMarkdown and writeRST define computations that can be used in any instance of the PandocMonad type class. PandocMonad is defined in the module Text.Pandoc.Class.

Two instances of PandocMonad are provided: PandocIO and PandocPure. The difference is that computations run in PandocIO are allowed to do IO (for example, read a file), while computations in PandocPure are free of any side effects. PandocPure is useful for sandboxed environments, when you want to prevent users from doing anything malicious. To run the conversion in PandocIO, use runIO (as above). To run it in PandocPure, use runPure.

As you can see from the Haddocks, Text.Pandoc.Class exports many auxiliary functions that can be used in any instance of PandocMonad. For example:
-- | Get the verbosity level.
getVerbosity :: PandocMonad m => m Verbosity

-- | Set the verbosity level.
setVerbosity :: PandocMonad m => Verbosity -> m ()

-- | Get the accumulated log messages (in temporal order).
getLog :: PandocMonad m => m [LogMessage]
getLog = reverse <$> getsCommonState stLog

-- | Log a message using 'logOutput'. Note that 'logOutput' is
-- called only if the verbosity level exceeds the level of the
-- message, but the message is added to the list of log messages
-- that will be retrieved by 'getLog' regardless of its verbosity level.
report :: PandocMonad m => LogMessage -> m ()

-- | Fetch an image or other item from the local filesystem or the net.
-- Returns raw content and maybe mime type.
fetchItem :: PandocMonad m => String -> m (B.ByteString, Maybe MimeType)

-- | Set the resource path searched by 'fetchItem'.
setResourcePath :: PandocMonad m => [FilePath] -> m ()
If we wanted more verbose informational messages during the conversion we defined in the previous section, we could do this:
result <- runIO $ do
  setVerbosity INFO
  doc <- readMarkdown def (T.pack "[testing](url)")
  writeRST def doc
Note that PandocIO is an instance of MonadIO, so you can use liftIO to perform arbitrary IO operations inside a pandoc conversion chain.
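For completeness, here is a hedged sketch of the pure variant (the writer function name is assumed from recent pandoc versions and may differ in yours):

```haskell
-- Sketch: the same kind of pipeline run without IO; errors still
-- arrive as an Either PandocError Text value.
let result = runPure $ do
      doc <- readMarkdown def (T.pack "hello *world*")
      writeHtml5String def doc
```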
Options
The first argument of each reader or writer is for options controlling the behavior of the reader or writer: ReaderOptions for readers and WriterOptions for writers. These are defined in Text.Pandoc.Options. It is a good idea to study these options to see what can be adjusted.

def (from Data.Default) denotes a default value for each kind of option. (You can also use defaultWriterOptions and defaultReaderOptions.) Generally you'll want to use the defaults and modify them only when needed, for example:
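The example referred to above did not survive in this copy of the document; here is a plausible sketch, using record-update syntax on the defaults (option and extension names assumed from Text.Pandoc.Options and Text.Pandoc.Extensions):

```haskell
-- Sketch: adjust only the options you need; everything else keeps
-- its default value.
result <- runIO $ do
  doc <- readMarkdown def{ readerExtensions = pandocExtensions }
                      (T.pack "[testing](url)")
  writeRST def{ writerReferenceLinks = True } doc
```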
Some particularly important options to know about:

- writerTemplate: By default, this is Nothing, which means that a document fragment will be produced. If you want a full document, you need to specify Just template, where template is a String containing the template's contents (not the path).
- readerExtensions and writerExtensions: These specify the extensions to be used in parsing and rendering. Extensions are defined in Text.Pandoc.Extensions.
Builder
Sometimes it's useful to construct a Pandoc document programmatically. To make this easier we provide the module Text.Pandoc.Builder in pandoc-types.

Because concatenating lists is slow, we use special types Inlines and Blocks that wrap a sequence of Inline and Block elements. These are instances of the Monoid typeclass and can easily be concatenated:
import Text.Pandoc.Builder

mydoc :: Pandoc
mydoc = doc $ header 1 (text "Hello!")
           <> para (emph (text "hello world") <> text ".")

main :: IO ()
main = print mydoc
If you use the OverloadedStrings pragma, you can simplify this further:
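The simplified version itself is missing from this copy; a plausible sketch (assuming OverloadedStrings lets string literals stand in for the text calls):

```haskell
{-# LANGUAGE OverloadedStrings #-}
-- Sketch: with OverloadedStrings, the explicit `text` wrappers from
-- the previous example can be dropped.
mydoc :: Pandoc
mydoc = doc $ header 1 "Hello!"
           <> para (emph "hello world" <> ".")
```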
Here's a more realistic example. Suppose your boss says: write me a letter in Word listing all the filling stations in Chicago that take the Voyager card. You find some JSON data in this format (fuel.json):
[ { "state" : "IL",
    "city" : "Chicago",
    "fuel_type_code" : "CNG",
    "zip" : "60607",
    "station_name" : "Clean Energy - Yellow Cab",
    "cards_accepted" : "A D M V Voyager Wright_Exp CleanEnergy",
    "street_address" : "540 W Grenshaw"
  },
  ...
And then use aeson and pandoc to parse the JSON and create the Word document:
{-# LANGUAGE OverloadedStrings #-}
import Text.Pandoc.Builder
import Text.Pandoc
import Data.Monoid ((<>), mempty, mconcat)
import Data.Aeson
import Control.Applicative
import Control.Monad (mzero)
import qualified Data.ByteString.Lazy as BL
import qualified Data.Text as T
import Data.List (intersperse)

data Station = Station
  { address       :: String
  , name          :: String
  , cardsAccepted :: [String]
  } deriving Show

instance FromJSON Station where
  parseJSON (Object v) = Station
    <$> v .: "street_address"
    <*> v .: "station_name"
    <*> (words <$> (v .:? "cards_accepted" .!= ""))
  parseJSON _ = mzero

createLetter :: [Station] -> Pandoc
createLetter stations = doc $
     para "Dear Boss:"
  <> para "Here are the CNG stations that accept Voyager cards:"
  <> simpleTable [plain "Station", plain "Address", plain "Cards accepted"]
       (map stationToRow stations)
  <> para "Your loyal servant,"
  <> plain (image "JohnHancock.png" "" mempty)
 where
  stationToRow station =
    [ plain (text $ name station)
    , plain (text $ address station)
    , plain (mconcat $ intersperse linebreak $ map text $ cardsAccepted station)
    ]

main :: IO ()
main = do
  json <- BL.readFile "fuel.json"
  let letter = case decode json of
        Just stations -> createLetter
          [s | s <- stations, "Voyager" `elem` cardsAccepted s]
        Nothing -> error "Could not decode JSON"
  docx <- runIO (writeDocx def letter) >>= handleError
  BL.writeFile "letter.docx" docx
  putStrLn "Created letter.docx"
Voila! You’ve written the letter without using Word and without looking at the data.
Data files
Pandoc has a number of data files, which can be found in the data/ subdirectory of the repository. These are installed with pandoc (or, if pandoc was compiled with the embed_data_files flag, they are embedded in the binary). You can retrieve data files using readDataFile from Text.Pandoc.Class.

readDataFile will first look for the file in the "user data directory" (setUserDataDir, getUserDataDir), and if it is not found there, it will return the default installed with the system. To force the use of the default, use setUserDataDir Nothing.
Templates
Pandoc has its own template system, described in the User's Guide. To retrieve the default template for a system, use getDefaultTemplate from Text.Pandoc.Templates. Note that this looks first in the templates subdirectory of the user data directory, allowing users to override the system defaults. If you want to disable this behavior, use setUserDataDir Nothing.

To render a template, use renderTemplate', which takes two arguments, a template (String) and a context (any instance of ToJSON). If you want to create a context from the metadata part of a Pandoc document, use metaToJSON' from Text.Pandoc.Writers.Shared. If you also want to incorporate values from variables, use metaToJSON instead, and make sure writerVariables is set in WriterOptions.
Handling errors and warnings
runIO and runPure return an Either PandocError a. All errors raised in running a PandocMonad computation will be trapped and returned as a Left value, so they can be handled by the calling program. To see the constructors for PandocError, see the documentation for Text.Pandoc.Error.

To raise a PandocError from inside a PandocMonad computation, use throwError.
In addition to errors, which stop execution of the conversion pipeline, one can generate informational messages. Use report from Text.Pandoc.Class to issue a LogMessage. For a list of constructors for LogMessage, see Text.Pandoc.Logging. Note that each type of log message is associated with a verbosity level. The verbosity level (setVerbosity/getVerbosity) determines whether the report will be printed to stderr (when running in PandocIO), but regardless of verbosity level, all reported messages are stored internally and may be retrieved using getLog.
Walking the AST
It is often useful to walk the Pandoc AST either to extract information (e.g., what are all the URLs linked to in this document?, do all the code samples compile?) or to transform a document (e.g., increase the level of every section header, remove emphasis, or replace specially marked code blocks with images). To make this easier and more efficient, pandoc-types includes a module Text.Pandoc.Walk.
Here’s the essential documentation:
class Walkable a b where
  -- | @walk f x@ walks the structure @x@ (bottom up) and replaces every
  -- occurrence of an @a@ with the result of applying @f@ to it.
  walk :: (a -> a) -> b -> b
  walk f = runIdentity . walkM (return . f)

  -- | A monadic version of 'walk'.
  walkM :: (Monad m, Functor m) => (a -> m a) -> b -> m b

  -- | @query f x@ walks the structure @x@ (bottom up) and applies @f@
  -- to every @a@, appending the results.
  query :: Monoid c => (a -> c) -> b -> c
Walkable instances are defined for most combinations of Pandoc types. For example, the Walkable Inline Block instance allows you to take a function Inline -> Inline and apply it over every inline in a Block. And Walkable [Inline] Pandoc allows you to take a function [Inline] -> [Inline] and apply it over every maximal list of Inlines in a Pandoc.
Here’s a simple example of a function that promotes the levels of headers:
promoteHeaderLevels :: Pandoc -> Pandoc
promoteHeaderLevels = walk promote
  where promote :: Block -> Block
        promote (Header lev attr ils) = Header (lev + 1) attr ils
        promote x = x
walkM is a monadic version of walk; it can be used, for example, when you need your transformations to perform IO operations, use PandocMonad operations, or update internal state. Here's an example using the State monad to add unique identifiers to each code block:
addCodeIdentifiers :: Pandoc -> Pandoc
addCodeIdentifiers doc = evalState (walkM addCodeId doc) 1
  where addCodeId :: Block -> State Int Block
        addCodeId (CodeBlock (_,classes,kvs) code) = do
          curId <- get
          put (curId + 1)
          return $ CodeBlock (show curId,classes,kvs) code
        addCodeId x = return x
query is used to collect information from the AST. Its argument is a query function that produces a result in some monoidal type (e.g. a list). The results are concatenated together. Here’s an example that returns a list of the URLs linked to in a document:
listURLs :: Pandoc -> [String]
listURLs = query urls
  where urls (Link _ _ (src, _)) = [src]
        urls _ = []
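The bottom-up query idea is language-agnostic. As a rough illustration only — using hypothetical dict/list node shapes, not pandoc's actual JSON AST format — the same pattern might look like this in Python:

```python
# Illustrative sketch of a bottom-up "query" over a nested dict/list tree.
# Node shapes here are hypothetical, not pandoc's real JSON representation.
def query(f, node):
    """Walk the tree bottom up, concatenating f's results for every node."""
    results = []
    if isinstance(node, dict):
        for child in node.values():
            results += query(f, child)
        results += f(node)  # visit the node itself after its children
    elif isinstance(node, list):
        for child in node:
            results += query(f, child)
    return results

# Toy document: a paragraph containing a link and a string.
doc = {"t": "Para", "c": [{"t": "Link", "target": "https://pandoc.org"},
                          {"t": "Str", "c": "hello"}]}

def urls(node):
    # The query function returns a list (a monoid), as in the Haskell version.
    return [node["target"]] if node.get("t") == "Link" else []

print(query(urls, doc))  # ['https://pandoc.org']
```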
Creating a front-end
All of the functionality of the command-line program
pandoc has been abstracted out in
convertWithOpts in the module Text.Pandoc.App. Creating a GUI front-end for pandoc is thus just a matter of populating the
Opts structure and calling this function.
Notes on using pandoc in web applications
Pandoc’s parsers can exhibit pathological behavior on some inputs. So it is always a good idea to wrap uses of pandoc in a timeout function (e.g.
System.Timeout.timeout from
base) to prevent DOS attacks.
If pandoc generates HTML from untrusted user input, it is always a good idea to filter the generated HTML through a sanitizer (such as
xss-sanitize) to avoid security problems.
Using
runPure rather than
runIO will ensure that pandoc’s functions perform no IO operations (e.g. writing files). If some resources need to be made available, a “fake environment” is provided inside the state available to
runPure (see
PureState and its associated functions in Text.Pandoc.Class). It is also possible to write a custom instance of
PandocMonad that, for example, makes wiki resources available as files in the fake environment, while isolating pandoc from the rest of the system.
http://pandoc.org/using-the-pandoc-api.html
Migrate from AppFabric Caching to Azure In-Role Cache
Updated: February 13, 2015
This topic describes how to migrate from Microsoft AppFabric 1.1 for Windows Server to Microsoft Azure Cache. This type of caching migration might take place when you move an on-premises application to Azure.
AppFabric supports on-premises cache clusters that use your own servers and network infrastructure. The move to Azure is facilitated by the fact that most of the features and programming model of AppFabric are shared with Microsoft Azure Cache.
Before you migrate your cache-enabled application to the cloud, first review the differences between AppFabric and Microsoft Azure Cache. If you require a feature, such as write-through, that is not available in Microsoft Azure Cache, you must redesign that part of your solution in order to successfully move to Azure.
All of the caches and their relevant settings must be recreated on a cache-enabled Azure role. The first step is to analyze the AppFabric cache cluster to understand the current on-premises configuration.
On the AppFabric cache cluster, open a Caching Administration Windows PowerShell command prompt.
Run the Get-Cache command without any parameters. This lists the named caches.
For each cache listed, run the Get-CacheConfig command. Pass the cache name as an argument to this command. Record the configuration settings for each cache. The following shows an example of this output.
PS C:\Windows\system32> Get-CacheConfig TestCache

CacheName                : TestCache
TimeToLive               : 20 mins
CacheType                : Partitioned
Secondaries              : 0
MinSecondaries           : 0
IsExpirable              : True
EvictionType             : None
NotificationsEnabled     : True
WriteBehindEnabled       : False
WriteBehindInterval      : 300
WriteBehindRetryInterval : 60
WriteBehindRetryCount    : -1
ReadThroughEnabled       : False
ProviderType             :
ProviderSettings         : {}
Run the Get-CacheHost command to see a list of cache hosts in the cache cluster.
For each cache host, run the Get-CacheHostConfig command. Pass the required arguments, the name of the cache host and the caching port (typically 22233). Record the
Sizeparameter for each cache host.
Add the
Sizevalues for all cache hosts to determine the overall size of the cache cluster.
In Visual Studio create a cloud service or open an existing cloud service. Add a Cache Worker Role to the cloud service. For more information, see Hosting Azure In-Role Cache on Dedicated Roles. This role will provide caching capabilities for the entire cloud service. The following steps describe how to recreate the named caches.
In Visual Studio, go to the Solution Explorer window.
In the Roles folder, double-click the role that hosts caching.
In the role properties dialog, select the Caching tab.
Under Named Cache Settings, first, modify the default cache to match the settings for the default cache in the AppFabric cache cluster. Then use the Add Name Cache link to add any additional caches required by your solution. The following screen shot shows several configured named caches.
Create or re-use a storage account. That account should be used in the storage account field on the Caching tab for deployments to the cloud.
The following table correlates the output from Get-CacheConfig to the settings in the caching window.
The cache cluster size can be configured by understanding the relationship between the virtual machine size and the number of running instances of this role. For more information, see Capacity Planning Considerations for Azure In-Role Cache.
The final step is to move any application code to the cloud service projects. Note that the namespace and many of the APIs remain the same. The following steps provide migration guidance for each project that requires caching.
In the Visual Studio project, first remove any references to the AppFabric assemblies.
Next, backup the dataCacheClient sections of the web.config or app.config file.
Remove the dataCacheClient and other caching sections from the web.config or app.config file.
Next, prepare the project to use Microsoft Azure Cache. For more information, see How to: Prepare Visual Studio to Use Azure In-Role Cache.
Then, manually add the removed dataCacheClient sections back into the web.config or app.config file. The following changes must be made to these sections:
- Add an autoDiscover element to each section. The identifier attribute must reference the name of the role that hosts caching.
- Remove all hosts, host, and securityProperties elements. These are unnecessary and unsupported in Microsoft Azure Cache.
https://msdn.microsoft.com/en-us/library/azure/jj835079.aspx
there should be a ; at the end of every statement.
you can declare cout and cin by including iostream
#include <iostream>
#include <cstdlib>
using namespace std;

int main()
{
    int ch;
    do {
        cout << "Product list\n1 Chainsaw\n2 Whipper Snipper\n3 Lawn Mower\n4 Hedger\n5 Brushcutter\n6 Quit";
        cin >> ch;
        switch (ch) {
            case 1: cout << "Chainsaw"; break;
            case 2: cout << "Whipper snipper"; break;
            case 3: cout << "Lawn mower"; break;
            case 4: cout << "Hedger"; break;
            case 5: cout << "Brushcutter"; break;
            case 6: cout << "Quit"; exit(0);
            default: cout << "Invalid choice"; break;
        }
    } while (ch != 6);
    return 0;
}
https://www.experts-exchange.com/questions/27643328/Simple-c-Assignment-coding.html
[Java3D] New to java3D and need some help here
Hey guys, I'm still learning Java3D and haven't even run any example programs
yet.
This is because an error occurred when I ran the .class file.
After compiling an example program that I got from the net, there were no errors and
the .class file was created successfully. But when I run the program, it shows
this:
Exception in thread "main" java.lang.NoClassDefFoundError: Tetrahedron
(wrong name: org/jdesktop/j3d/examples/appearance/Tetrahed...)
I'm not sure whether I installed the Java3D properly or not. Here are my
steps in setting up java3D.
1) Install jdk-1_5_0_06-nb-5_0-win
2)Install java3d-1_3_1-windows-i586-opengl-rt
3)install java3d-1_3_1-windows-i586-opengl-sdk
4)java3d-1_4_0-windows-i586
-copy the file in C:\j3d-140-win that I extracted out from a zip in
java3d-1_4_0-windows-i586 to :
C:\Program Files\Java\jre1.5.0_06\bin,
C:\Program Files\Java\jre1.5.0_06\lib\ext,
C:\Program Files \Java\jdk1.5.0_06\jre\bin,
C:\Program Files\Java\jdk1.5.0_06\jre\lib\ext
as what stated in the instruction in java3d-1_4_0
And I want to import my VRML file into Java3D. I already texture mapped my
model in VRML, and I want to ask: after importing into Java3D, will the texture
mapping be corrupted and differ from what I get in VRML?
Because I texture mapped the model in 3ds Max and exported it to a VRML file, then
imported it again into Java3D, I'm afraid the texture mapping will be changed
when importing into Java3D.
Please help me, I'm new to Java3D (I've just read some tutorials from the net) and need
to complete my project as soon as possible. I'm frustrated...
Thanks for helping and spending time read my question.
--
View this message in context:...
Sent from the java.net - java3d interest forum at Nabble.com.
https://www.java.net/node/654735
Named Entity Recognition using LSTM in Keras
Named Entity Recognition is a form of NLP and is a technique for extracting information to identify the named entities like people, places, organizations within the raw text and classify them under predefined categories.
Introduction
Named Entity Recognition (NER) models can be used to identify mentions of people, locations, organizations, times, company names, and so on. So a Named Entity Recognition model not only acts as a standard tool for information extraction but also serves as a foundational and important preprocessing tool for many downstream applications like Machine Translation, Question-Answering, Customer Feedback Handling, and even Text Summarization.
Motivation Behind The Project
Human-engineered features were domain-specific, rule-based, and tedious to build. But in recent years, Deep Learning, empowered by continuous real-valued vector representations and semantic composition through non-linear processing, has achieved state-of-the-art performance without hand-crafted features. This allows the machine to be fed raw data.
The Named Entity Recognition dataset we are using in this project is great for practice: once you can pick intents and custom named entities out of your own sentences, the same approach helps you solve real business problems (like picking entities from Electronic Medical Records, etc.).
Implementation
The pre-requisites for this project are some prior experience with Python projects, as well as an understanding of neural networks, mainly Recurrent Neural Networks (RNNs). We'll implement this project in a Jupyter Notebook.
Task 1: Import Modules
First, we will import the necessary Python libraries and helper functions. We will mainly use the Keras API, with TensorFlow 2 as the backend.
%matplotlib inline sets matplotlib to inline mode, so the output of plotting commands is displayed inline within frontends like the Jupyter notebook, directly below the code cell.
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
np.random.seed(0)
plt.style.use("ggplot")

import tensorflow as tf
print('Tensorflow version:', tf.__version__)
print('GPU detected:', tf.config.list_physical_devices('GPU'))
Task 2: Load and Explore the NER Dataset
This dataset contains sentences in English along with a corresponding annotation for each word. The sentences in the dataset are encoded in Latin-1.
Essential info about entities:
- geo = Geographical Entity
- org = Organization
- per = Person
- gpe = Geopolitical Entity
- tim = Time indicator
- art = Artifact
- eve = Event
- nat = Natural Phenomenon
Total Words Count = 1354149
Target Data Column: “tag”
data = pd.read_csv("ner_dataset.csv", encoding="latin1")
data = data.fillna(method="ffill")
data.head(20)
Here, we have filled the missing values in dataset using “ffill” method. After reading dataset, we’ll see the first 20 entries in the dataset which looks like this;
fig. First 20 rows
Visualizing the Sentence
The first sentence is “Thousands of demonstrators have marched through London to protest ……..”. In the rightmost column, we can see tags like B-geo for London, Iraq, and Britain, which signifies that these words are geographical entities. POS means part of speech; ignore it for now.
Now, let’s see the number of unique words in the corpus and the number of unique tags in our dataset using “nunique” function which is the helper function from the pandas library.
print("Unique words in corpus:", data['Word'].nunique())
print("Unique tags in corpus:", data['Tag'].nunique())
And it shows result like
Unique words in corpus: 35178 Unique tags in corpus: 17
The number of unique tags in the dataset equals the number of output classes, which is 17 in our case, while our vocabulary (the input dimension) is 35178 words.
Now we are going to create a list, using the set method to de-duplicate the values in the “Word” column, and then append a padding token named “ENDPAD”.
words = list(set(data["Word"].values))
words.append("ENDPAD")
num_words = len(words)
Now, let's do a similar process for our target variable, the tags. We'll see the number of words is 35179 (since the padding token was appended) and the number of tags is the same as before, i.e. 17.
tags = list(set(data["Tag"].values))
num_tags = len(tags)
print(num_words, num_tags)
Now, we are gonna reorganize our dataset so that we can easily split it into a feature matrix and a target vector. For each sentence we want a list of 3-tuples, where the 1st value in each tuple is the word, the 2nd is its POS (Part of Speech) tag, and the 3rd is its NER tag, i.e. the class name.
Task 3: Retrieve Sentences and Corresponding Tags
In this task, we are gonna create a class that will allow us to retrieve the sentences and their corresponding tags so that we can clearly define the input and output to our Neural Network Model.
We’ll start with 1 sentence and group them using the lambda function. We will select the sentence from “word” column, see their values, and convert them into list. We will repeat this process for Part Of Speech tag and Name Entity Recognition.
Then we’ll apply this aggregated function to our sentences. Later we’ll split that entire list into sub-lists.
class SentenceGetter(object):
    def __init__(self, data):
        self.n_sent = 1
        self.data = data
        self.empty = False
        agg_func = lambda s: [(w, p, t) for w, p, t in zip(s["Word"].values.tolist(),
                                                           s["POS"].values.tolist(),
                                                           s["Tag"].values.tolist())]
        self.grouped = self.data.groupby("Sentence #").apply(agg_func)
        self.sentences = [s for s in self.grouped]

    def get_next(self):
        try:
            s = self.grouped["Sentence: {}".format(self.n_sent)]
            self.n_sent += 1
            return s
        except:
            return None
Here, we’ll just use that class using getter method.
getter = SentenceGetter(data)
sentences = getter.sentences
sentences[0]
Here, we can see the extracted first sentence which contains that list having three values.
fig. 3: Retrieved Sentence
So, I hope you get the idea now. Join me in the next task, where we define our vocabulary by observing the words and their frequencies within our dataset.
Task 4: Define Mappings between Sentences and Tags
Here, we are going to build two dictionaries: one maps each word to a unique numerical index, and the other maps each tag to a unique index.
word2idx = {w: i + 1 for i, w in enumerate(words)}
tag2idx = {t: i for i, t in enumerate(tags)}
word2idx
Now, we can see that each word is assigned a unique index. We can recover a word from its index by looking it up in our dictionary and returning the corresponding key.
Mapping
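As a quick illustration of these mappings, here is the same pair of comprehensions applied to a tiny hypothetical vocabulary (not the real dataset), along with the inverse mappings used to go from indices back to words and tags:

```python
# Hypothetical toy vocabulary and tag set, for illustration only.
words = ["London", "marched", "Thousands"]
tags = ["O", "B-geo"]

# Same comprehensions as in the tutorial: word indices start at 1
# (0 is left free for padding), tag indices start at 0.
word2idx = {w: i + 1 for i, w in enumerate(words)}
tag2idx = {t: i for i, t in enumerate(tags)}

# Inverse mappings: index -> word/tag.
idx2word = {i: w for w, i in word2idx.items()}
idx2tag = {i: t for t, i in tag2idx.items()}

print(word2idx["London"])             # 1
print(idx2word[word2idx["London"]])   # London
print(tag2idx["O"])                   # 0
```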
Task 5: Padding Input Sentences and Creating Train/Test Splits
For use with a neural network, at least with Keras and TensorFlow, all input sentences need to have equal length. So we are going to pad our input sentences to a prespecified length. But first we need to figure out what that length should be. One of the easiest heuristics is to look at the distribution of sentence lengths within your corpus.
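This length-picking heuristic can be sketched in plain Python. The lengths below are hypothetical, not the real corpus; the idea is to pick a length that covers the vast majority of sentences instead of the single longest one:

```python
# Hypothetical sentence lengths; in the real project these would come from
# [len(s) for s in sentences].
lengths = [12, 18, 21, 22, 20, 35, 48, 19, 23, 17]

# Choose a max_len covering ~95% of sentences rather than the maximum,
# so most sentences need little padding and very few get truncated.
lengths_sorted = sorted(lengths)
idx = int(0.95 * (len(lengths_sorted) - 1))
max_len = lengths_sorted[idx]
print(max_len)  # 35
```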
Let’s do this visually by plotting the histogram using matplotlib hist function. For each sentence, we are gonna take the length of the sentence and plot it in our histogram.
plt.hist([len(s) for s in sentences], bins=50)
plt.show()
And the output looks like
fig. padding I/p
You can see in the plot that the mean sentence length in our dataset is around the 20 to 22 word mark. Looking at the x-axis, 50 is a safe value to choose, since most sentences in our dataset are shorter than that.
Padding
Now, in the next step, we'll use the pad_sequences helper function for padding. We define max_len equal to 50. X is going to be a numerical representation of our words: for each word in each sentence, we get its corresponding value from the word-to-index dictionary we created previously, using a nested list comprehension.
Now we can make use of our pad_sequence helper function. ‘post’ is just a value of padding argument at the end of the sentence. y is our target vector. In y, we want to iterate through our sentences list for the words in the sentence and then retrieve tag to the index value. After that, all we have to do is pad y and convert it to categorical. At this point, we have successfully created our feature matrix and target vector.
from tensorflow.keras.preprocessing.sequence import pad_sequences

max_len = 50
X = [[word2idx[w[0]] for w in s] for s in sentences]
X = pad_sequences(maxlen=max_len, sequences=X, padding="post", value=num_words-1)

y = [[tag2idx[w[2]] for w in s] for s in sentences]
y = pad_sequences(maxlen=max_len, sequences=y, padding="post", value=tag2idx["O"])
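What pad_sequences with padding="post" does can be sketched in plain Python. The sequences and pad value below are toy examples, not taken from the dataset:

```python
def pad_post(seqs, maxlen, value):
    """Truncate or right-pad each sequence to exactly maxlen items,
    mimicking pad_sequences(..., padding='post')."""
    out = []
    for s in seqs:
        s = s[:maxlen]                               # truncate if too long
        out.append(s + [value] * (maxlen - len(s)))  # pad at the end
    return out

seqs = [[5, 9], [1, 2, 3, 4, 5, 6]]
print(pad_post(seqs, maxlen=4, value=0))
# [[5, 9, 0, 0], [1, 2, 3, 4]]
```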
The next task is to split the dataset into training and testing using sklearn library, which is the backbone of Machine Learning. test_size=0.2 means our 80% dataset is split for training and the remaining 20% for testing.
from sklearn.model_selection import train_test_split

x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
Now, join me in the next task to actually get the most succeeding part of this project.
Task 6: Build and Compile a Bidirectional LSTM Model
Here, we'll use a Bidirectional LSTM instead of a simple RNN because a bidirectional network makes use of both past and future context, along with the current input, at each timestep, which allows a bit more flexibility.
from tensorflow.keras import Model, Input
from tensorflow.keras.layers import LSTM, Embedding, Dense
from tensorflow.keras.layers import TimeDistributed, SpatialDropout1D, Bidirectional
First, we need to define the input layer to our model and specify the shape to be max_len, which is 50. Then we do a raw word embedding (we are not including the Part Of Speech tag in this project). The input dimension is, of course, the number of unique words in our vocabulary. We apply this embedding layer to the input layer, creating embeddings from input_word.
Let's apply a SpatialDropout1D layer with a rate of 0.1 to the output of the previous layer. Instead of dropping individual elements, SpatialDropout1D drops entire 1D feature maps (whole embedding channels), which works better when adjacent timesteps are strongly correlated.
Bidirectional LSTM
Now we can go ahead and create our Bidirectional LSTM. We are using an LSTM rather than a plain RNN because RNNs suffer from the vanishing gradient problem. The units parameter (the dimensionality of the LSTM's hidden state) is set reasonably high, 100 for now.
You can change these hyperparameters, like increasing units to 250 or max_len to 100, which may improve the accuracy of the model.
And recurrent_dropout is set to a small value. We apply this layer to the output of the previous layer, hence the model argument. Then we use TimeDistributed, which accepts the Dense layer as an argument and applies it to every temporal slice of the input; in effect, the same Dense layer is applied at each of the 50 timesteps.
Let's use a softmax activation function, whose output can be interpreted as a probability distribution over the tags; taking the argmax of the returned values selects the highest-probability class as the predicted output.
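The softmax-then-argmax step at prediction time can be sketched in plain Python. The logits below are hypothetical per-tag scores for a single token, not real model output:

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(x - max(logits)) for x in logits]  # shift for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical per-tag scores for one token.
logits = [1.0, 3.0, 0.5]
probs = softmax(logits)

# argmax: index of the highest-probability class.
predicted_class = max(range(len(probs)), key=probs.__getitem__)
print(predicted_class)  # 1
```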
Now, let’s combine them and feed the input_word and output layer to the model and see it’s summary.
input_word = Input(shape=(max_len,))
model = Embedding(input_dim=num_words, output_dim=50, input_length=max_len)(input_word)
model = SpatialDropout1D(0.1)(model)
model = Bidirectional(LSTM(units=100, return_sequences=True, recurrent_dropout=0.1))(model)
out = TimeDistributed(Dense(num_tags, activation="softmax"))(model)
model = Model(input_word, out)
model.summary()
Summary
The summary shows that we have about 1.88 million parameters to train.
Now let's compile our model, specifying the optimizer, the loss function, and the metric we want to track. We'll use the adam optimizer, sparse_categorical_crossentropy as the loss function, and accuracy as our metric.
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
Now join me in the next task where we’ll use some callbacks and finish our training so that we can evaluate it on our test set to see how well is our model doing and naming the entity with tags.
Task 7: Train the Model
Specifically, we will use EarlyStopping and PlotLossesCallback, along with a ModelCheckpoint that saves the best weights seen so far.
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping from livelossplot.tf_keras import PlotLossesCallback
Let's go ahead and instantiate our callbacks. We use EarlyStopping so that we don't need to hard-code the number of epochs: if the monitored metric does not improve for 2 consecutive epochs, we stop training. That is the meaning of patience.
Accuracy
We monitor the validation accuracy (patience was already mentioned above) and set verbose equal to 0 so that we don't get extra output. The mode is set to 'max' because we want to maximize the validation accuracy.
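The patience logic behind EarlyStopping can be sketched in plain Python. The accuracy values below are made up for illustration, not real training metrics:

```python
def stops_at(val_accuracies, patience=2):
    """Return the 1-based epoch at which early stopping would trigger,
    or None if training runs through all epochs."""
    best = float("-inf")
    bad_epochs = 0
    for epoch, acc in enumerate(val_accuracies, start=1):
        if acc > best:
            best = acc       # improvement: reset the patience counter
            bad_epochs = 0
        else:
            bad_epochs += 1  # no improvement this epoch
            if bad_epochs >= patience:
                return epoch
    return None

# Accuracy peaks at epoch 3 and fails to improve twice, so we stop at epoch 5.
print(stops_at([0.90, 0.95, 0.97, 0.96, 0.965]))  # 5
print(stops_at([0.90, 0.95, 0.97]))               # None
```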
And we will make use of PlotLossesCallback, which is one of my favorite callbacks, as it eliminates the need to go outside the Jupyter Notebook to plot the model's training progress and metrics. Everything is updated live within the notebook. Let's put it on our callback list.
Train the Model
Then all we have left to start training is to call model.fit(), passing our training data, x_train and y_train, and using the test set as validation data. You can increase the batch_size if your GPU has more memory. Here we use just 3 epochs, as training takes more than 10 to 15 minutes with more epochs.
%%time
chkpt = ModelCheckpoint("model_weights.h5", monitor='val_loss', verbose=1,
                        save_best_only=True, save_weights_only=True, mode='min')

early_stopping = EarlyStopping(monitor='val_accuracy', min_delta=0, patience=2,
                               verbose=0, mode='max', baseline=None,
                               restore_best_weights=False)

callbacks = [PlotLossesCallback(), chkpt, early_stopping]

history = model.fit(
    x=x_train,
    y=y_train,
    validation_data=(x_test, y_test),
    batch_size=32,
    epochs=3,
    callbacks=callbacks,
    verbose=1
)
Then shift+Enter to run that cell.
Accuracy
We can see at the bottom right end that the accuracy of our model is more than 98% .
Task 8: Evaluate Named Entity Recognition Model
Here, we will evaluate our model on unbiased test data that the model hasn’t seen before and then perform prediction.
model.evaluate(x_test, y_test)
And we got pretty good accuracy at more than 98%.
Now let's take a look at some predictions. We create a table where the leftmost column holds words from our test set, the second column holds the true tags, and the third holds our model's predicted tags.
We are going to pick a random index i to select an example from our test set; i can take any value from 0 up to the number of entries in the test set.
Prediction
Let’s get our model’s prediction, store that in p, call the predict method on model and feed test set to it. And remember that we are choosing ith example from test set.
This returns a matrix of per-class probabilities, and we select the argmax along the last axis. So the first 3 lines of code take care of picking a random example and generating our model's prediction on it. Then we look at the true values the model is trying to predict: y_test holds them, and we take the i-th row.
Then finally, let’s define the pattern of our result.
i = np.random.randint(0, x_test.shape[0])  # e.g. 659
p = model.predict(np.array([x_test[i]]))
p = np.argmax(p, axis=-1)
y_true = y_test[i]

print("{:15}{:5}\t {}\n".format("Word", "True", "Pred"))
print("-" * 30)
for w, true, pred in zip(x_test[i], y_true, p[0]):
    print("{:15}{}\t{}".format(words[w-1], tags[true], tags[pred]))
The final result of our project Named Entity Recognition looks like below
fig. Final Result
Now let's discuss the result. As shown above, the United Nations is tagged as an organization, Ituri as a geographical entity, and the remaining words don't belong to any specific category, so their tag is "O", i.e. outside any named entity.
Applications of Named Entity Recognition Model:
-Classifying content for news providers,
-Automating the Recommendation System,
-Segregating Research papers on the basis of relevant entities,
-Customer Feedback Handling in big companies, services,
-Efficient search algorithms to search all words in millions of articles.
That’s it.
Thank you for reading and you can download the Source Code from Github: Tekraj Github
You can reach me on LinkedIn, GitHub, Twitter, and Gmail. Also read my previous article on
Anomaly Detection in Time Series Data using Keras: Click Here
https://valueml.com/named-entity-recognition-using-lstm-in-keras/
The @protocol synchronization
By Muralidharan Padmanaban, Naresh Gurijala and Intiser Ahmed
What is synchronization?
In the @protocol world, your personal data is encrypted with your own private key and stored on your mobile device. Periodically, this data is copied securely over to a dedicated cloud server which only you can decrypt and read since you have sole access to your private key. Nobody else, including The @ Company can read your data. The process of maintaining identical copies of this data on the server and your mobile devices is known as synchronization.
Commit Log
Commit logs play a crucial role in the @protocol synchronization process. When you perform an action (create, update, delete) in an @pp, the changes get saved in your handheld device as key-value pairs. Each key-value pair is assigned a commit ID from the server, which is a unique number returned from the server for an update/delete operation in the @pp.
Real-time Synchronization in the @protocol
There are two main scenarios that utilize synchronization. The first scenario is when your device is offline. Updates are saved in your device but not synced to the server. If the device goes online, the saved updates are then synced to the server. The second scenario is when you have multiple devices: one device is offline, and the other is online. Updates from the device which is online are periodically synced to the server, but the device that is offline will not pull these changes from the server until it goes online. For example, let’s say you are reading this article on your handheld device and you’ve added 5 claps. The number of added claps will be synced to the cloud server. If you view the same article from another handheld device which was offline then comes back online, you will see that the claps number is updated.
Sync from device to secondary server.
Let’s describe a use case: @alice inserts her phone number, which is saved as a key-value pair on the device and added as an entry to the device’s commit log without a commit ID. When the @pp checks the latest commit on the server, it receives a null response because the server has not created a commit ID yet. Then the @pp sends an update command to the server to update the phone number @alice inserted. The server commit log saves this update with a commit ID and returns the commit ID to the local (handheld device) commit log. The @pp updates the commit log with the commit ID sent from the server. Now, the device and server have identical commit logs and are synchronized.
Sync from secondary server to device
In this scenario, the cloud has the latest changes but the local storage is out of sync.
Let’s describe a use case: @alice has two devices. One device is offline, and the other is online. On both devices, the latest commit ID is 2. On device #2, which is online, @alice updates her location value as ‘california,’ creating a new key-value pair. The @pp syncs this key from device #2 to the server, updating the server’s commit ID with the latest value, 3. Now, device #1, which was offline, comes back online. The latest commit ID in this device is 2, whereas the latest commit ID on the server is 3. To resolve this discrepancy, the @pp updates device #1 with the unsynced key, adding a commit entry. After synchronization, both device #1 and device #2 should have 3 as their latest commit ID.
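The commit-ID reconciliation described in this use case can be sketched in plain Python. The data structures below are hypothetical simplifications; the real @protocol wire format and commit log differ:

```python
# Each log is a list of (commit_id, key, value) entries, ordered by commit_id.
# Hypothetical data: the server has one commit the offline device is missing.
server_log = [(1, "phone", "555-0100"),
              (2, "email", "alice@example.com"),
              (3, "location", "california")]

device_log = [(1, "phone", "555-0100"),
              (2, "email", "alice@example.com")]

def pull_unsynced(server_log, device_log):
    """Copy server entries newer than the device's latest commit ID."""
    latest = device_log[-1][0] if device_log else 0
    for entry in server_log:
        if entry[0] > latest:
            device_log.append(entry)
    return device_log

pull_unsynced(server_log, device_log)
print(device_log[-1])  # (3, 'location', 'california')
```

After this pull, both the device and the server report the same latest commit ID, which is the definition of being in sync.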
Syncing only @pp specific data
The same @sign can be used to log into multiple @pps, each of which has a unique namespace. The @pp can sync data in two different ways: (1) syncing all the keys (across namespaces) and (2) syncing only keys that are specific to an @pp (a single namespace). The default behavior is syncing all the keys, but an @pp can override this by syncing only specific keys. To achieve this, we provide a regex to sync data from the remote secondary. Ideally, app developers should use the app namespace as the regex.
Device offline sync
Apps built on the @protocol remain responsive even when offline. When you update information on a device that is offline, the key-value pair is stored in the device. Once connectivity is reestablished, your device sends an update to the server, synchronizing it with the current server state.
Let’s describe a use case: @alice’s mobile device goes offline. After that, @alice adds her email address, which is saved on @alice’s mobile device as a key-value pair without any commit ID. When @alice’s device comes back online, a manual sync is triggered, sending an update command to the server, which in turn responds with a commit ID. The commit ID returned by the server is then updated on the local commit log on @alice’s device, ensuring that the device and server are in sync.
Data Resiliency in the @protocol
Data resiliency is the ability to recover data in situations when an @pp is uninstalled or a device is reset, including when your device is lost or the device suddenly resets. With the @protocol, you receive a key file during the initial onboarding process. When your device is reset and the @pp is reinstalled, the synchronization process ensures that your previous data is pulled from the server to the @pp so that the @pp can be restored to its original state.
Performance considerations
When data is synced from the cloud server to an @pp or vice versa, there are several performance considerations that have to be factored in to make the sync process seamless. We will cover the performance aspects in a separate article.
Authors
Intiser Ahmed (intiser@atsign.com) is a Student Ambassador for The @ Company
Murali Dharan (murali@atsign.com) is a Backend Developer for The @ Company
Naresh Reddy Gurijala (naresh@atsign.com) is a Senior Software Engineer for The @ Company
Learn more about The @ Company from our GitHub repo.
https://atsigncompany.medium.com/the-protocol-synchronization-77b00ca5341b